A Bayesian Take on Gaussian Process Networks
Accept (poster)
Summary: Gaussian Process (GP) networks are directed graphical models for continuous data, where the function mapping parent node values to the parameters of a child node is a Gaussian process. Given the graphical network structure and a dataset of observations, learning the GP model for each node reduces to standard supervised GP regression using the parent nodes as input $x$ values and the child node as output $y$ values. Likewise, marginal likelihood evaluation and hyperparameter optimization/marginalization can use the standard methods for GP regression. In this work, as I understand it, the authors' goal is to marginalise over network structures; each network structure yields a different set of parent-child links, and each one requires marginalizing the GP hyperparameters again, which, if done naively, is very expensive. The authors thus propose methods to speed up this process, making network structure marginalisation much more computationally efficient. These methods include: - using Metropolis-Hastings-like algorithms to make small steps through the space of graphs, thereby removing the need for excessive recomputation; - for each graph in the Metropolis-Hastings proposal, using a Laplace approximation to marginalize GP hyperparameters instead of full MCMC marginalisation; - once a dataset of sampled graphs is collected, using the expensive MCMC marginalisation to weight each sampled graph with importance sampling. Experiments on both a synthetic toy example and a protein interaction network example are presented. Strengths: - The proposed methods are standard approaches that I feel are well justified in this use case, and the overall model and algorithm are well designed. - Personally, coming from a traditional GP background, I found this paper accessible and easy to digest.
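As the summary notes, each node's score is just a standard GP regression marginal likelihood with the parents as inputs. A minimal sketch of that per-node computation, with an RBF kernel and synthetic data (an illustration of the standard GP formula, not the paper's implementation):

```python
import numpy as np

def rbf_kernel(x1, x2, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel on stacked input vectors."""
    sq = np.sum((x1[:, None, :] - x2[None, :, :]) ** 2, axis=-1)
    return variance * np.exp(-0.5 * sq / lengthscale**2)

def gp_log_marginal_likelihood(X, y, lengthscale=1.0, variance=1.0, noise=0.1):
    """log p(y | X) for a zero-mean GP with Gaussian observation noise."""
    n = len(y)
    K = rbf_kernel(X, X, lengthscale, variance) + noise * np.eye(n)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha
            - np.sum(np.log(np.diag(L)))
            - 0.5 * n * np.log(2 * np.pi))

rng = np.random.default_rng(0)
Pa = rng.normal(size=(50, 2))                          # parent values = GP inputs
child = np.sin(Pa[:, 0]) + 0.1 * rng.normal(size=50)   # child node = GP outputs
print(gp_log_marginal_likelihood(Pa, child))
```

This is the quantity that must be recomputed (and, in the paper, marginalized over hyperparameters) for every parent set that changes between graphs.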
Weaknesses: - Everything from line 101 up until line 159 is focused on model fitting and hyperparameter marginalization for a single GP and, as far as I can tell, has nothing to do with networks, so the section title felt a bit misleading, and rearranging the material might help. Equations 8 - 12 could hypothetically be written using $x, y$ instead of $Pa_{X}, X$ and moved to Section 2. Section 3 could then describe how the graph structure determines each node's "local GP training set", how the model is a summation over GPs on the individual training sets, and how the fact that there are so many justifies a hyper-parameter approximation. - Equation (10) appears to be a standard marginal likelihood with an extra layer of marginalisation over kernel hyperparameters (with or without hyperpriors); this is standard practice in GP regression, see [code here](https://github.com/rmgarnett/gpml_extensions/) and [this paper](https://proceedings.neurips.cc/paper/2010/hash/4d5b995358e7798bc7e9d9db83c612a5-Abstract.html). The authors cite bridge sampling, which I am not personally familiar with, but there exist many, many algorithms to solve this problem (this is not a weakness of the paper, as any MCMC algorithm will suffice, but at first glance it seems a rather strange choice). - In Section 4, the authors repeatedly state that $\lambda = 0$ removes higher frequencies and is a linear case, so that $w_{i,0}$ is the only non-zero coefficient. If I take equation 14 and make the suggested substitution, I get a constant with noise $$ X_i = \sum_{Z\in Pa_{X_i}} \beta_i w_{i,0} + \epsilon_i, $$ which implies no dependency between $X$ and $Z$. (The only way I could see a linear relationship between $X$ and $Z$ is if $\sin(jZ)$ for very small $j$, where $\sin()$ is approximately linear near the origin, but given normalised $Z$ and $j=1$ this isn't likely.)
Hence the $\lambda=0$ case is generating data that ignores graph structure, and the Hamming distance is meaningless, as the models will learn the empty graph from the data. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - as mentioned above regarding $\lambda=0$, is my understanding correct? Can the authors comment on this? - Figure 1: I assume the graph Hamming distance shows how "tight" the posterior over graphs is clustered around the true graph, tighter being better, whereas with 100 observations in 10-D space this seems a very sparse dataset from which to learn any complex model, and I would expect the posterior to be rather broad. What is the Hamming distance of uniformly randomly generated graphs? What is the average Hamming distance of the "true" posterior from the true generating graph? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: - the authors provide an explanation of the Score equivalence issue (which hadn't occurred to me until it was mentioned) Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
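For concreteness, the graph Hamming distance referenced in the Figure 1 question can be computed directly from adjacency matrices. A minimal sketch using one common structural-Hamming-distance convention (additions, deletions, and reversals each cost one; conventions vary, so this may differ from the paper's exact definition):

```python
import numpy as np

def shd(A, B):
    """Structural Hamming distance between two DAG adjacency matrices.
    Edge additions and deletions cost 1; an edge reversal also costs 1."""
    diff = A != B
    reversals = np.logical_and(diff, diff.T)   # both (i,j) and (j,i) differ
    return int(diff.sum() - reversals.sum() // 2)

# chain 0 -> 1 -> 2 versus collider 0 -> 1 <- 2: one reversal, SHD = 1
A = np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]])
B = np.array([[0, 1, 0], [0, 0, 0], [0, 1, 0]])
print(shd(A, B))   # -> 1
```

Averaging this quantity over posterior graph samples, against the true generating graph, gives the expected SHD (E-SHD) used in the figures.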
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their constructive comments and suggestions. * *as mentioned above regarding $\lambda=0$, is my understanding correct? Can the authors comment on this?* There is indeed a typo in equation (14): the linear terms should have been included, and the index $j$ of the second sum should start at 1 instead: $$ X_i = \sum_{Z \in \mathrm{Pa}_{X_i}} \beta_i \, w_{i,0} \, Z + \left\{ \sum_{j=1}^{6} \beta_i \, v_{i,j} \sin(jZ) + \beta_i \, w_{i,j} \cos(jZ) \right\} + \epsilon_i. $$ It was however coded correctly in the experiments. We thank the reviewer for this careful observation, and will update equation (14) in the manuscript. *** * *Figure 1: I assume the graph Hamming distance shows how "tight" the posterior over graphs is clustered around the true graph, tighter being better, whereas with 100 observations in 10-D space this seems a very sparse dataset to learn any complex model and I would expect the posterior to be rather broad. What is the Hamming distance of uniformly randomly generated graphs? What is the average Hamming distance of the "true" posterior from the true generating graph?* Tighter is indeed better, but only up to a point, because, as noted, the posterior is spread and its average distance is non-zero. To check convergence to the true posterior we need measures like the K-L divergence in Figure 2 of the main text, but this requires enumerating all DAGs and computing their scores. The current best complexity for exact computations scales like 3^(# of nodes) times the score computation time, and is only feasible for small graphs (as in Figure 2). The number of DAGs with 10 nodes is over 4 quintillion, making it infeasible to compute the exact E-SHD of the true posterior distribution.
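The "over 4 quintillion" figure can be checked with Robinson's recurrence for the number of labelled DAGs on $n$ nodes, $a_n = \sum_{k=1}^{n} (-1)^{k+1} \binom{n}{k} 2^{k(n-k)} a_{n-k}$ with $a_0 = 1$:

```python
from math import comb
from functools import lru_cache

@lru_cache(maxsize=None)
def dag_count(n):
    """Number of labelled DAGs on n nodes, via Robinson's recurrence."""
    if n == 0:
        return 1
    return sum((-1) ** (k + 1) * comb(n, k) * 2 ** (k * (n - k)) * dag_count(n - k)
               for k in range(1, n + 1))

print(dag_count(10))   # over 4 quintillion (> 4 * 10**18)
```

The first few values are 1, 3, 25, 543, 29281, ..., growing super-exponentially, which is why exact posterior enumeration is only feasible for very small graphs.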
However, since we have samples from the posterior, the “GP partition” results in Figure 1 of the main text are Monte Carlo estimates of the average Hamming distance between the true posterior and the true generating graph (and the “BGe partition” for a linear-Gaussian model). Exact computations for the uniform distribution over DAGs are cheaper (there is no score component) and MC estimates are simple, so this could be added, but as an average DAG with 10 nodes has more than 20 edges, the distances would, as we might expect, be high compared to the scale of the figure. *** * *the authors provide an explanation of the Score equivalence issue (which hadn't occurred to me until it was mentioned)* We’re glad the reviewer appreciated this aspect, as it is often overlooked. It is unclear to us why this is a limitation though.
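For concreteness, the corrected equation (14) from the response above can be turned into a small data generator. The standard-normal coefficient draws and the single scale `lam` multiplying the non-linear terms are illustrative assumptions here, not the paper's exact simulation settings; the point is that `lam = 0` now yields a genuinely linear parent-child dependency rather than a constant:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_child(parents, lam=0.5, noise_sd=0.1):
    """Draw a child node following the corrected equation (14).
    The N(0, 1) draws for w and v, and the scale `lam` on the
    higher-frequency terms, are illustrative assumptions."""
    n = parents.shape[0]
    x = np.zeros(n)
    for Z in parents.T:                       # one column per parent
        w = rng.normal(size=7)
        v = rng.normal(size=7)
        x += w[0] * Z                         # linear term
        for j in range(1, 7):                 # higher-frequency terms, j = 1..6
            x += lam * (v[j] * np.sin(j * Z) + w[j] * np.cos(j * Z))
    return x + rng.normal(scale=noise_sd, size=n)

Z = rng.normal(size=(200, 2))
x_linear = sample_child(Z, lam=0.0)           # child is a linear function of its parents
```

With `lam = 0.0` the generated child is (up to noise) an exact linear combination of the parents, so graph-recovery metrics remain meaningful in the linear case.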
Summary: This paper proposes a Bayesian structure-learning framework for GPNs that is claimed to be less computationally expensive. The approach uses Monte Carlo and importance sampling to sample from the posterior distribution of network structures, compares models using their marginal likelihood, and computes the posterior probability of GPN features. Some simulations are discussed. Strengths: - This paper studies a rather important problem. - It is easy to follow the thoughts. Weaknesses: - The contribution of the paper is not clearly justified and seems moderate at best. It is a combination of a few approaches (Gaussian processes, Laplace approximation, and importance sampling). - The claims of the paper are not completely justified. It provides an approach but does not discuss how the main concerns are addressed. It is not clear how efficient this method is relative to others. - Some references are missing, for instance: * Efficient and scalable structure learning for Bayesian networks: algorithms and applications by Zhu 2020 * Efficient structure learning of Bayesian networks using constraints by de Campos * MCMC algorithms for Bayesian analysis of phylogenetic trees - Lack of comparison/validation against other state-of-the-art algorithms, and lack of discussion of how efficient this framework is via a computational-intensity analysis. - Lack of theoretical discussion regarding convergence and the issues arising from the Laplace approximation and importance sampling. - Lack of validation, sensitivity analysis, and comparison to alternative models. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: A few questions that come to mind: - The Laplace approximation is sensitive to the MAP estimate. As the scores are being approximated, this may cause a poor approximation and bias, which may impact the performance of the subsequent importance sampling, causing high variance of the importance weights and thus inefficient sampling and inaccurate estimation. Could the paper elaborate on this issue? - Importance sampling can introduce bias if the proposal does not adequately cover the support of the target posterior. Also, high variance of the weights can lead to unstable estimates. - Although the use of the Laplace approximation and importance sampling may in general reduce the computational intensity, the overall performance depends on the model and the size of the dataset (specifically when approximating the score and so on). It is well known that as the dimension of the network increases or the dataset grows larger, the computational cost of the Laplace approximation and importance sampling can be significant. How is the relationship between effective sample size, variability of the weights, and growth of the network, dataset, and dimensionality justified? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: - The Laplace approximation is only valid if the distribution is Gaussian or close to Gaussian. This assumption can simply be violated. - Lack of discussion of computational intensity and how this framework overcomes the known computational issues. - Lack of validation, sensitivity analysis, and comparison to alternative models. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
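The reviewer's effective-sample-size question can be made concrete with the standard Kish ESS diagnostic, $(\sum_i w_i)^2 / \sum_i w_i^2$. A toy sketch (illustrative Gaussian target and proposal, not the paper's graph posterior) showing how ESS collapses as the proposal drifts away from the target:

```python
import numpy as np

def effective_sample_size(log_w):
    """Kish effective sample size from unnormalized log importance weights."""
    log_w = np.asarray(log_w) - np.max(log_w)   # stabilize before exponentiating
    w = np.exp(log_w)
    return w.sum() ** 2 / (w @ w)

rng = np.random.default_rng(0)
ess = {}
for mu in (0.0, 1.0, 3.0):
    # proposal N(mu, 1) drifting away from the target N(0, 1)
    x = rng.normal(mu, 1.0, size=5000)
    log_w = -0.5 * x**2 + 0.5 * (x - mu) ** 2   # log target - log proposal
    ess[mu] = effective_sample_size(log_w)
    print(mu, round(ess[mu]))
```

When proposal and target coincide the ESS equals the sample size; as the mismatch grows, a few weights dominate and the ESS drops sharply.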
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their comments, and acknowledge the time they took to assess the paper. * *The contribution of the paper is not clearly justified and seems moderate at best. It is a combination of a few approaches (Gaussian processes, Laplace approximation, and importance sampling).* Our work addresses an important problem, as the review also recognises, drawing on technical tools that originate in different domains of the statistical inference literature and combining them effectively into a novel whole. The Bayesian approach accounts for uncertainty both in the parameters and in the structure, allowing us to compute posterior probabilities of network features in a principled manner. *** * *The claims of the paper are not completely justified. [...] It is not clear how efficient this method is relative to others.* The supplementary material already included analyses of the run-times illustrating the computational costs of the procedure, which are expected to be higher compared to simpler methods. The added value is that this is the only method correctly targeting the posterior distribution (as illustrated for example in Figure 2 in the main text). *** * *Some references are missing, for instance [...]* The references suggested are not entirely pertinent to the topic covered in the paper. The first two references do not qualify as Bayesian methods, since they do not provide a posterior distribution over DAGs and/or do not compute the marginal likelihood of the data, opting for a generic score function instead. The last reference concerns phylogenetic trees and is not directly related to our problem of Bayesian network structure learning. However, we agree that adding some text on non-Bayesian point-estimate approaches, and then modifying the current sentence to better emphasize the Bayesian side, would better embed the work in the literature, and we will update the manuscript accordingly.
*** * *Lack of comparison/validation to other state-of-the-art algorithms and also lack of discussion on how efficient this framework is by providing computational intensity analysis.* There are few methods that currently are able to perform Bayesian structure learning for BNs with non-linear, continuous data. Again, as for the references, we have to distinguish between Bayesian and non-Bayesian approaches. We of course welcome concrete suggestions for other algorithms to include. As for the computational intensity analysis, we already examined this in Figures A.3, A.5 and A.8 in the supplementary material. *** * *Lack of validation, sensitivity analysis and comparison to alternative models.* We compared the performance of our method to five different state-of-the-art algorithms, and provided a comparison of run-times in Figures A.3, A.5 and A.8 in the supplementary material. Furthermore, we have now added new figures in the additional pdf in the global response, which include an analysis of our method's performance for different choices of the prior hyperparameters. *** * *Laplace approximation is only valid if the distribution is Gaussian or close to Gaussian. This assumption can simply be violated.* The Laplace approximation is indeed only exact if the posterior distribution is Gaussian; we are however not relying directly on this approximation to compute the marginal likelihoods. We are using the Laplace approximation as a proposal distribution for subsequent importance sampling. The exact posterior is then computed by marginalizing the likelihood with respect to the prior using bridge sampling. The results so obtained finally provide valid samples. *** * *The Laplace approximation is sensitive to the MAP estimate. As the scores are being approximated, this may cause a poor approximation and bias, which may impact the performance of the subsequent importance sampling, causing high variance of the importance weights and thus inefficient sampling and inaccurate estimation.* * *Importance sampling can introduce bias if the proposal does not adequately cover the support of the target posterior. Also, high variance of the weights can lead to unstable estimates.* The same Laplace approximation is used both in the target for graph sampling and in the importance sampling. Even if the approximation is poor, these terms "cancel" out and do not induce bias, though a poor approximation may increase variance. A high variance in the weights will only be observed when there is a large discrepancy between the proposal and true posterior. Using a Laplace approximation as a proposal distribution for importance sampling is commonplace in the literature (see e.g. Kuk, 1998 or Bek et al., 2018) since it provides a reasonable approximation of the target distribution. There are also results showing convergence in Hellinger distance of the Laplace approximation to the posterior (Schillings et al., 2020). *** * *It is well known that as the dimension of the network increases or the dataset grows larger, the computational cost of the Laplace approximation and importance sampling can be significant. How is the relationship between effective sample size, variability of the weights, and growth of the network, dataset, and dimensionality justified?* We are unfortunately not entirely sure what is meant by this comment, but we will be happy to discuss during the discussion period. *** Given the response comments above, and that many of the concerns were already addressed in the original submission, we would appreciate a reconsideration of the perceived weaknesses of the paper and hopefully an adjustment of the score accordingly.
--- Rebuttal Comment 1.1: Comment: I extend my appreciation to the authors for their response. Despite recognizing the significance of the problem, my stance remains unchanged—I believe that this work lacks the requisite novelty to warrant publication at NeurIPS, as some of my initial concerns persist. Consequently, I maintain my original score.
Summary: The paper proposes methodology to perform Bayesian inference on the (hyper)parameters of so-called Gaussian Process Networks (GPNs), which are sets of functional equations with Gaussian Process (GP) priors on the functions relating a variable to its parents, as well as inference on the graph structure and graph posterior expectations. Previous works in the GPN literature that consider a posterior over graphs did not put priors over the hyperparameters of the GP kernels and perform the associated posterior inference. To address the challenges, the authors crucially exploit the decomposability property of p(G | D): "scores" (conditionals of X_i given PA_i) only need to be recomputed if the parents have changed, so that scores can be reused across different graphs. Since scores are intractable integrals over the (hyper)parameters of the GPs, the authors use bridge sampling to approximate the integrals, and make use of Laplace approximations in a two-step procedure to reduce computational cost compared to running MCMC directly with the estimated scores. The methodology is evaluated on a good set of synthetic and real-data examples. Strengths: Overall a technically solid paper on an interesting topic for the NeurIPS community. Main strengths are as follows: - GPNs are gaining more attention, in particular due to their application in causal structure learning (von Kügelgen et al 2019), so the authors address a relevant problem of performing "fully" Bayesian inference with these models - The scope and objective of the paper and proposed methods are clear and targeted - The paper demonstrates an excellent knowledge of the surrounding relevant literature on MCMC for structure learning, and describes its relationship with the current work clearly - Well explained and solid experimental section Weaknesses: - [**Presentation of Section 3**] The main weakness of the paper is the presentation of the approximations in Section 3.
Even as a reader well familiar with bridge sampling, Laplace approximations and MCMC, the part from lines 143 to 172 is unclear at first read and too condensed. I am guessing the final goal is to approximate p( \Phi | D) of Eq (13). Rather than start from the various approximations needed to get there, I would suggest starting from there and write down the integrals that arise, then explain how to approximate each step. Otherwise, while I'm reading I keep thinking "Why am I approximating the scores ? What is the final goal ?". Some related questions/comments about clarity of this part below. - Is the reason that you approximate the scores both **(a)** via Laplace **and** **(b)** via bridge sampling because **(a)** is used to build the proposal while **(b)** is needed to compute the (self-normalized) estimator in Eq. (13), specifically, the unnormalized target ? If so you should note that the estimator in Eq (13) has an **additional bias** ( in addition to the standard bias coming from self-normalized IS) due to the fact that the bridge sampling estimator is not unbiased. You should also state that because you are using self-normalized IS, you do not need to know the normalizing constants of either q(G|D) or p(G|D), as the ratio of these two constants cancels out. - [**Difficulty introduced by the additional marginalization**]: while usually people assume there is inherent interest in a "fully" Bayesian treatment and marginalize hyperparameters as well when feasible (in this case, kernel hyperparameters), the authors should make it clearer why it is beneficial to do so specifically in the setting of GPNs with interest in p(G|D). There is a comment about this in lines 290-293 but it is quite vague. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: - How is the Gaussian proposal for the numerator of Eq. (11) chosen ? I would have expected a Laplace - IS here too ? Perhaps clarify what is the choice made here and possibly why. 
- Could you comment theoretically/intuition-wise/experimentally on which of these approximations affect accuracy most and how ? For example, should we focus on getting better proposals $q(G|D)$ or on approximating the scores with MC (bridge sampling, in your case) more efficiently to approximate the target (numerator of the IS weights, in IS terminology) better ? Further, could you provide a couple of example $\Phi$ of interest already in the methodology (Section 3) for concreteness ? - The comment in lines 290-293 is important in my view, but should be expanded upon. Why/how does marginalizing the hyperparameters mitigate the bias of the Laplace approximation? Why can we not "just" not marginalize them and simply use more MCMC samples ? - Regarding score equivalency, could you clarify why it sounds like it's not good if DAGs from the same MEC obtain different scores ? Does the property make it harder to approximate the true $p(G|D)$ somehow ? - Does any Bernstein von Mises - type result hold for $p(G|D)$ ? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The authors do not have an explicit limitations subsection, although limitations are not hard to infer from the text, including mainly the computational cost of marginalizing the hyperparameters and the unclear effects on performance of the approximations performed by bridge sampling; it is also not clear whether the authors can use kernels appropriate for non-stationary functions.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
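Review 3's remark that self-normalized importance sampling makes the unknown normalizing constants of q(G|D) and p(G|D) cancel can be checked numerically on a toy continuous example (illustrative Gaussian densities, not the paper's graph posterior):

```python
import numpy as np

def snis(phi, log_target_unnorm, log_proposal, x):
    """Self-normalized importance sampling estimate of E_target[phi(X)]."""
    log_w = log_target_unnorm(x) - log_proposal(x)
    w = np.exp(log_w - log_w.max())           # normalizing constants cancel in the ratio
    return np.sum(w * phi(x)) / np.sum(w)

rng = np.random.default_rng(0)
x = rng.normal(1.0, 1.5, size=20000)                  # samples from the proposal
log_q = lambda t: -0.5 * ((t - 1.0) / 1.5) ** 2       # proposal N(1, 1.5^2), up to a constant
log_p = lambda t: -0.5 * t**2                         # target N(0, 1), unnormalized
est1 = snis(np.square, log_p, log_q, x)               # E[X^2] under N(0,1) is 1
est2 = snis(np.square, lambda t: log_p(t) + 123.0, log_q, x)   # arbitrarily rescaled target
print(est1, est2)
```

Rescaling the unnormalized target leaves the estimate unchanged, which is exactly why the ratio of normalizing constants of proposal and target never needs to be known (while the bias from plugging a noisy bridge-sampling score into the weights remains a separate issue).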
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their constructive comments and interesting questions. * *How is the Gaussian proposal for the numerator of Eq. (11) chosen ? I would have expected a Laplace - IS here too ? Perhaps clarify what is the choice made here and possibly why.* The proposal function g() was set to a normal distribution, with its first two moments chosen to match those of the posterior distribution. This is a standard choice in the literature and the default in the bridge sampling implementation that we used (Gronau et al., 2017). We will add a clarification of this when discussing implementation details in Section 3.1. *** * *Could you comment theoretically/intuition-wise/experimentally on which of these approximations are the ones that affects accuracy most and how ? For example, should we focus on getting better proposals $q(G|D)$ or in approximating the scores with MC (bridge sampling, in your case) more efficiently to approximate the target (numerator of the IS weights, in IS terminology) better ?* This is an interesting and relevant question which is however challenging to address in all generality. Improving the proposal distribution can provide more uniform weights, leading to a more efficient exploration of the posterior and reducing variance. More accurate proposals would however require more computational power and could undermine the point of having a proposal distribution in the first place. Improving the MC estimates of the scores would on the other hand reduce the bias deriving from the bridge sampling approach (as you noted in your comments). The approximations and trade-offs used in the experiments seemed to perform quite well, though determining the best approach to perform posterior inference would be an interesting area of experimentation in this domain. 
*** * *Further, could you provide a couple of example $\Phi$ of interest already in the methodology (Section 3) for concreteness ?* Three examples of $\Phi$ are already provided in Section 2.1, lines 61-62. *** * *Why/how does marginalizing the hyperparameters mitigate the bias of the Laplace approximation?* The Laplace approximation is meant as an approximation of the score function, since it does not fully take into account the uncertainty regarding the hyperparameters, and only provides a good approximation if the posterior over the hyperparameters is approximately Gaussian. Marginalizing the hyperparameters on the other hand provides an accurate estimation of the score because it directly follows the definition of the score in equation (10). This process of marginalizing is however computationally costly, leading us to use the Laplace approximation in the first step of our approach. Finally we correct the approximation with importance sampling. *** * *Does any Bernstein von Mises - type result hold for $p(G|D)$ ?* While we are not aware of any such result, we agree it would be interesting if any such results could be established and an interesting line of research. *** * *Regarding score equivalency, could you clarify why it sounds like it's not good if DAGs from the same MEC obtain different scores ?* Two DAGs from the same MEC are theoretically indistinguishable from observational data. The lack of score equivalence in our case is caused by the GPN assumption, which introduces asymmetry into the factorizations of the joint distribution via the structural equation model (8). Because score equivalence is a common, and in some cases desirable, property to avoid strong distributional assumptions, we simply wished to emphasize that the identifiability of otherwise score equivalent DAGs is a consequence of the assumptions underpinning the GPN model. *** * *[...] 
it is also not clear whether the authors can use kernels appropriate for non-stationary functions.* While we didn't explicitly mention non-stationary kernels, the method is applicable to any kernel function as long as a marginal likelihood can be computed.
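The rebuttal's point that the Laplace approximation of a score is exact only when the (hyper)parameter posterior is Gaussian can be illustrated on a toy conjugate model, where the log-joint is quadratic and Laplace recovers the exact evidence (the model and numbers are illustrative, not the paper's GP score):

```python
import numpy as np

def log_evidence_laplace(y, sigma2=1.0, tau2=4.0):
    """Laplace approximation to log p(y) for y_i ~ N(theta, sigma2),
    theta ~ N(0, tau2). Exact here because the posterior over theta is Gaussian."""
    n = len(y)
    h = n / sigma2 + 1 / tau2                 # negative Hessian of the log joint
    theta_map = (y.sum() / sigma2) / h        # posterior mean coincides with the MAP
    log_lik = (-0.5 * np.sum((y - theta_map) ** 2) / sigma2
               - 0.5 * n * np.log(2 * np.pi * sigma2))
    log_prior = -0.5 * theta_map**2 / tau2 - 0.5 * np.log(2 * np.pi * tau2)
    return log_lik + log_prior + 0.5 * np.log(2 * np.pi) - 0.5 * np.log(h)

def log_evidence_exact(y, sigma2=1.0, tau2=4.0):
    """Exact log p(y): marginally y ~ N(0, sigma2 * I + tau2 * J)."""
    n = len(y)
    cov = sigma2 * np.eye(n) + tau2 * np.ones((n, n))
    sign, logdet = np.linalg.slogdet(cov)
    alpha = np.linalg.solve(cov, y)
    return -0.5 * (y @ alpha + logdet + n * np.log(2 * np.pi))

rng = np.random.default_rng(0)
y = rng.normal(1.0, 1.0, size=30)
print(log_evidence_laplace(y), log_evidence_exact(y))   # agree to numerical precision
```

For non-Gaussian hyperparameter posteriors the two quantities diverge, which is why the paper corrects the Laplace-based scores by importance sampling against a bridge-sampling estimate.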
Summary: This paper proposes an MCMC algorithm for Bayesian inference for Gaussian Process Networks, where you model the distribution of a node as a function of its parents plus noise, with the function a Gaussian process. The sampler uses a Laplace approximation to make informed moves between networks. The sampler is demonstrated on a simulated and a real example. Strengths: This is a fine paper -- which presents a sensible method for a general learning problem. It compares with some other methods and empirically gives some evidence of better performance. The presentation is generally good. Weaknesses: The ideas come across as rather incremental -- taking standard ideas and putting them together. The question then becomes how important and useful is the resulting method. Here is where I think the paper is a bit lacking. The empirical study is somewhat limited. Also, with their method there are numerous tuning parameters (i.e. the specification of the prior) -- and it is unclear how robust their results are to different choices of the priors. It would also be good to see some investigation of when and why the method works well relative to other methods. I can see that the Bayesian paradigm potentially gives advantages in terms of averaging over model uncertainty and being able to quantify uncertainty -- but these come with certain disadvantages (potential dependence on the prior; extra computational cost; how reliable are measures of uncertainty when the model is incorrect). The results could be a bit clearer -- e.g. more informative captions; make it clearer that the E-SHD is smaller for better methods; perhaps be clearer about which method is which. Is there any code available for the method? Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: It would be helpful to see the robustness of the results of your method as you vary the prior.
It would be good to see a more thorough evaluation of the methods -- in particular looking at understanding when the method works well/when it does not. It would be helpful to have (easy to use) code available and details of how to reproduce the results. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their comments, and acknowledge the time they took to assess the paper. * *It would be helpful to see the robustness of the results of your method as you vary the prior.* The priors we use for the hyperparameter set are standard, non-informative priors (see for instance Gramacy & Lee, 2009). Nevertheless, we agree that it would be interesting to see to what extent the results are dependent on the choice of prior. In Figure 2 of the new additional pdf in the global response we compare the results in terms of E-SHD for our GP score-based approach for different prior distributions on the length-scale hyperparameter $\theta$. In addition to the default IG(2,2) prior, we explore an IG(1,3) prior that favors higher lengthscales, as well as an IG(3,1) prior favoring lower values of $\theta$. The results show that varying the prior hyperparameters has a limited effect on the results. As for the prior over the graphs, we explore different choices when generating the ROC-like plots in Figure A.2 in the supplementary material. *** * *It would be good to see more thorough evaluation of the methods -- in particularly looking at understanding when the method works well/when it does not.* In our simulations, we look at different levels of non-linearity specifically for this purpose - comparing the performance of our approach in situations where linear models such as BGe are expected to work better. *** * *[...] make it clearer the E-SHD is smaller for better methods* We agree this could provide additional clarity, and will add a short comment under equation (15) mentioning this, though, as we previously noted in the manuscript, the E-SHD is not a direct measure of distance: the true posterior has a non-zero distance, so overly small values are not best.
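The inverse-gamma hyperpriors compared in this response have easily checked summaries; a quick sketch of their modes and means, using the shape/scale parameterization IG(a, b) where the mode is b/(a+1) and the mean is b/(a-1) for a > 1:

```python
def invgamma_mode(a, b):
    """Mode of an inverse-gamma IG(a, b) density: b / (a + 1)."""
    return b / (a + 1)

def invgamma_mean(a, b):
    """Mean of IG(a, b); infinite (undefined) when a <= 1."""
    return b / (a - 1) if a > 1 else float("inf")

for a, b in [(2, 2), (1, 3), (3, 1)]:
    print(f"IG({a},{b}): mode={invgamma_mode(a, b):.2f}, mean={invgamma_mean(a, b)}")
```

This matches the response's characterization: IG(1,3) places its mass at higher lengthscales than the default IG(2,2), while IG(3,1) concentrates on lower values.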
*** * *Is there any code available for the method?* As mentioned in lines 187-188, code to implement the method and reproduce the results of Sections 4 and 5 is publicly available. *** We hope the above comments have addressed the reviewer’s concerns and will be happy to hear back from them. --- Rebuttal Comment 1.1: Comment: Thanks for the reply and additional results. I am happy to see some evidence of robustness, and have increased my score on the paper to a marginal accept. --- Reply to Comment 1.1.1: Title: Thank you Comment: Thanks for your positive feedback, we’re glad our response and the additional results about robustness were appreciated.
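For readers less familiar with the pipeline discussed in these exchanges, the Laplace-approximated hyperparameter marginalisation at the core of the method can be illustrated in one dimension. The following is a minimal hypothetical sketch (not the paper's code; the data, kernel choice, noise level and bounds are illustrative assumptions) that Laplace-approximates the evidence of a GP regression score under the default IG(2,2) prior on the lengthscale $\theta$, and checks it against numerical quadrature:

```python
import math
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.integrate import quad

# Toy 1-d regression data (illustrative assumptions, not the paper's setup)
rng = np.random.default_rng(0)
X = rng.uniform(-2.0, 2.0, 20)
y = np.sin(X) + 0.1 * rng.standard_normal(20)
sigma2 = 0.01  # assumed known observation-noise variance

def log_marginal(theta):
    # GP log marginal likelihood, squared-exponential kernel, lengthscale theta
    K = np.exp(-0.5 * (X[:, None] - X[None, :]) ** 2 / theta ** 2)
    K += sigma2 * np.eye(len(X))
    _, logdet = np.linalg.slogdet(K)
    return -0.5 * (y @ np.linalg.solve(K, y) + logdet + len(X) * np.log(2 * np.pi))

def log_prior(theta, a=2.0, b=2.0):
    # inverse-gamma IG(a, b) log density, the default prior on the lengthscale
    return a * np.log(b) - math.lgamma(a) - (a + 1) * np.log(theta) - b / theta

def log_post(theta):  # unnormalised log posterior over the hyperparameter
    return log_marginal(theta) + log_prior(theta)

# Mode of the hyperparameter posterior
res = minimize_scalar(lambda t: -log_post(t), bounds=(0.05, 10.0), method="bounded")
t_map, lp_map = res.x, -res.fun

# Laplace approximation: log Z ~ log_post(t_map) + 0.5 * log(2*pi / h),
# where h is minus the second derivative at the mode (finite differences here)
eps = 1e-3
h = -(log_post(t_map + eps) - 2.0 * lp_map + log_post(t_map - eps)) / eps ** 2
laplace_logZ = lp_map + 0.5 * np.log(2.0 * np.pi / h)

# Reference value for this 1-d problem: numerical quadrature
Z, _ = quad(lambda t: np.exp(log_post(t) - lp_map), 0.01, 20.0)
quad_logZ = lp_map + np.log(Z)
```

In the paper's setting, this kind of cheap approximation stands in for an expensive MCMC/bridge-sampling estimate of the same integral at every proposed graph, which is where the computational savings come from.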
Rebuttal 1: Rebuttal: Following the reviewers' suggestions, we attach a pdf containing additional simulation results. Pdf: /pdf/94fe2a5c273f7c30e0602491cf601e6693324af0.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: Within the broader task of causal network structure identification from observational data, i.e. finding the probabilistic graphical model which best explains (has highest likelihood) observations of a set of variables, this paper focuses on a type of network known as a GPN. GPNs are adapted to scenarios where all variables are continuous, and are built from a simple building block in which every child random variable's distribution, conditional on its parents, is a GP regression with independent Gaussian noise. For this task and this type of network, the paper proposes a fully Bayesian inference procedure based on MC and MCMC to sample PGM structure (i.e. edge presence and direction), and compute graph-related statistics of interest (such as the probability of presence of an edge between two given variables). This procedure holds the promise of more accurately representing the graph structure distribution than score-based methods. Strengths: Originality: The paper's approach consists of "pushing through" with a strong MCMC sampling procedure for the stated problem. The problem statement is not original per se, but there is moderate inventiveness in putting together the required technical steps to overcome each hurdle: speeding up eq 5 (graph posterior) thanks to eq 12 (Laplace approximation of per-variable score), importance sampling for feature posteriors in eq 13. Originality is not the paper's strong suit however. Quality: Derivations and implementations are all correct. Experimental validation on synthetic data is a necessity, especially where full enumeration of network structures is needed; the validation on the Sachs dataset is interesting, though one would wish for further real-data experiments. I appreciate that score equivalence/graph identifiability is addressed thoroughly. Evaluation methods and prior choices seem well-motivated and comparable to other literature. Clarity: The paper is exceptionally clear.
Citations are almost all accurately chosen (with the exception of a few in the introduction, cf below). Significance: The paper tackles its stated problem head-on. I believe we will see further Bayesian takes on network structure inference, maybe with smarter graph posterior approximations or even variational approaches for certain types of networks. The choice of GPN (and the particular further choices made for experimental evaluation) is rather flexible as it supports varied functional dependencies of child upon parent variables. The experimental evaluation places variants (order/partition) of the paper's methods at better or on par with BGe, and better than other methods. It is clearly superior in its sampling behaviour (fig 2). Its running time, unfortunately hidden in the supplementary material fig A8, is 30x that of BGe however, which might restrict applicability. Weaknesses: Significance, as discussed above. More real-world experiments would help assess the impact, in particular if they were analysed with the level of detail of synthetic experiments sec4. Minor errors and typos I found - line 162: perform, not make - line 208: data is generated - line 266: define CPDAG - line 289: increase, our approach - lines 24 and 29, citations seem quite cherry-picked and do not provide a variety of viewpoints as I would expect; I would hope there are no biased self-citations here. - supplementary line 479: do you mean TP instead of TN? eq 19, what does the lowercase p stand for in FPRp? - table 1: I would add the usual arrow-up, arrow-down to indicate "higher/lower is better" for easier reading Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - To make the point lines 168 ff., would it be worth counting the number of score evaluations? - Do some real-world scenarios require a different choice from the uniform prior over graph structures (line 141) ?
- Table 1, rightmost column: can the poor performance of your method be explained by the fact that there is no reason to predict edge absence, when absence of correlation can be captured with a fairly wide conditional distribution, i.e. low covariance in GP? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: The limitations tied to algorithm choices are well discussed. It is worth motivating better why it is reasonable to focus on GPNs with respect to real-world needs, where all variables might not be continuous, where GP conditional distributions might be too loose, or where neural-network-parameterized conditional distributions might provide a more adequate model. I see no interesting limitations to discuss related to applications or societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their constructive comments and suggestions, and will incorporate them into the manuscript. * *Minor errors and typos I found [...] lines 24 and 29, citations seem quite cherry-picked and do not provide a variety of viewpoints as I would expect; I would hope there are no biased self-citations here. supplementary line 479: do you mean TP instead of TN? eq 19, what does the lowercase p stand for in FPRp?* Thanks for spotting the typos! The citations are based on papers performing Bayesian analyses with Bayesian networks, and while Bayesian networks are widespread, Bayesian inference for Bayesian networks is less so. For example, citation 25 is the only Bayesian network paper cited in a recent review of Bayesian statistics (doi.org/10.1038/s43586-020-00001-2). The lowercase p means we normalise by P rather than N so that FPs and FNs count the same in the comparisons. * *To make the point lines 168 ff., would it be worth counting the number of score evaluations?* This would indeed be an interesting additional result. Figure 1 in the additional pdf in the global response shows the number of scores that were evaluated for the same simulations of Figure 1 in the main text. In red is the number of scores computed via the Laplace approximated posterior (“Laplace approximate”), in light green the number of additional scores evaluated via bridgesampling for the partition MCMC algorithm (“Bridgesampling, partition”) and in dark green the same quantity for order MCMC. Naively running the MCMC algorithm without our importance sampling approach would require computing the number of scores in red with the computationally expensive bridge sampling method. On the other hand, our importance sampling approach requires computing via bridge sampling only the number of scores shown in light/dark green, leading to a much more efficient procedure.
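To make the importance-sampling step in this exchange concrete, here is a toy numeric sketch (the scores below are hypothetical, not taken from the paper): graphs are sampled by MCMC under the cheap Laplace-approximate score, and a posterior feature such as an edge probability is then corrected with self-normalised importance weights using the exact (bridge-sampling) scores, which only need to be computed once per unique visited graph:

```python
import numpy as np

# Hypothetical log scores for four sampled graphs:
# the approximate score drove the MCMC sampler, the exact score
# (e.g. via bridge sampling) is computed once per unique graph.
approx_logscore = np.array([-10.2, -11.0, -9.8, -12.5])
exact_logscore = np.array([-10.0, -11.4, -9.9, -12.0])
has_edge = np.array([1.0, 0.0, 1.0, 1.0])  # feature f(G): edge A -> B present?

# Self-normalised importance weights w_i proportional to exact_i / approx_i,
# computed in log space for numerical stability
logw = exact_logscore - approx_logscore
w = np.exp(logw - logw.max())
w /= w.sum()

edge_prob = float(w @ has_edge)  # corrected posterior probability of the edge
```

With equal weights the naive estimate of the edge probability would be 0.75; the reweighting shifts it toward the graphs whose approximate score understated the exact one.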
*** * *Do some real-world scenarios require a different choice from the uniform prior over graph structures (line 141)?* Yes, and knowledge of the underlying graph can be easily incorporated into the graph prior. For example, in the real-world data example in Section 5, prior knowledge of the edges that were experimentally validated by Sachs et al. (2015) would normally be included by up- or down-weighting the edges in question. Of course, to avoid circular reasoning we didn’t use them in the experiments on the Sachs data, and in fact in all of our experiments (unless stated otherwise) we employ a uniform prior when examining the performance of our approach. For real-world analyses, one typically chooses a non-uniform prior informed by the setting, and indeed the cited application references [1,25,30] all have this (using either a sparsity cut-off, information from a separate biological database, or excluding edges on the basis of temporal restrictions, respectively). *** * *Table 1, rightmost column: can the poor performance of your method be explained by the fact that there is no reason to predict edge absence, when absence of correlation can be captured with a fairly wide conditional distribution, i.e. low covariance in GP?* Edge absence implies a conditional (including marginal) independence relation between the two variables in question. However, edge absence implies neither zero correlation nor zero partial correlation in general (see for example A. Vargha, 1996). Predicting edge absence can therefore be more challenging, and we agree that the absence of an edge in the “ground truth” network could have different interpretations. In particular we find stronger evidence for an edge when we allow non-linear relationships than in the linear cases, which could be an indication of more complex dependencies. Of course we didn’t want to make too strong a statement, but this aspect may be worth discussing in more detail and we could adapt accordingly.
--- Rebuttal Comment 1.1: Title: Remaining concerns Comment: * On the issue of citations: to clarify, my argument is that the choice of citations in the passages I discussed stems from always the same set of authors (mainly Moffa and Kuipers), and that I see no reason for this particular selection. I have consulted the review cited by the authors in their rebuttal (doi.org/10.1038/s43586-020-00001-2) and see that indeed, [25] is the only citation this paper features on Bayesian networks -- this does not convince me that it is a good, representative choice. Bayesian network structure learning is a fairly wide topic, so it is easy to find a contrast, for example a review of Bayesian network structure learning (Kitson 2023) contains a wealth of citations from which to pick, many much more relevant than the ones proposed. To further clarify, I disagree with the three citation suggestions made by fellow reviewer sMvJ, and agree with the authors in pointing out that they are not relevant to the paper's topic. I remain concerned about the point I raised. Kitson, N.K., Constantinou, A.C., Guo, Z. et al. A survey of Bayesian Network structure learning. Artif Intell Rev 56, 8721–8814 (2023). https://doi.org/10.1007/s10462-022-10351-w * I agree with the clarity issues raised by reviewer uGD4, which have been addressed neither by the authors' response nor by the revised paper version https://openreview.net/pdf?id=bBIHqoZ3OR. * The paper should more clearly state the proposed method's computational cost; as I and others pointed out, it seems to be relegated to Additional material and a brief mention at line 320. * Overall, I feel that several of the questions raised by reviewers, beyond a discussion here, would improve the paper if incorporated; e.g. the clarification to reviewer uGD4 on why score equivalence is/is not desirable. My other questions have been addressed. Before reconsidering my score, I will await further responses by the authors, a.o.
on the point regarding choice of citations. --- Reply to Comment 1.1.1: Title: Clarification about intended amendments to the manuscript Comment: Thanks for your comment and for opening a discussion with us. For the main comment, despite their name, there is nothing inherently Bayesian about Bayesian networks, and we were trying to focus on fully Bayesian analyses with Bayesian networks. In response to sMvJ, we expressed our intention to make a clear distinction between Bayesian and non-Bayesian analyses and to better contextualise the work in the literature, a point we should have also explicitly mentioned in our response to you. To this aim, we will modify the current text to emphasise its allusion to fully Bayesian analyses. Furthermore, we will add some text and references, highlighting the different flavours of alternative approaches, explicitly touching on non-Bayesian and MAP point estimates, drawing from the examples nicely covered in the comprehensive review of Kitson et al. 2023 that you recommended (thanks for the pointer). In terms of revising the manuscript, our understanding is that the author guidelines explicitly forbid us to make changes at this stage (Quoting from the FAQ: “Can we upload a revision of our paper during the rebuttal/discussion period? No revisions are allowed until the camera-ready stage.”). Note that no text was allowed in the additional pdf, only figures/tables and captions. With expanded referencing to the wider literature of Bayesian networks and a clearer distinction between Bayesian and non-Bayesian approaches to their analyses, the current citations will make more sense. In line with the manuscript, we had taken a particularly strict standpoint where we intended both inferences over parameters and structures as Bayesian. With this understanding, among the applications quoted in the review of Kitson et al. 2023, the work of Moffa et al. 
2017 is the only paper in the intersection between Bayesian analysis and Bayesian networks, with the one we cite a more recent development of the same work covering more general applications, using dynamic versions of Bayesian networks, still in a fully Bayesian sense. The other 5 applications referenced in the introduction of Kitson et al. 2023 all target a single-point estimate structure, and even if some use a Bayesian score to penalise the likelihood (since some penalisation is always required to avoid fully connected DAGs), they are not fully Bayesian since they do not account for the (posterior) uncertainty in structure. Likewise, the work of Kuipers et al. 2018 appears as the only paper in the intersection between Bayesian analysis and Bayesian networks in the Bayesian statistics review article we previously mentioned. Falling in the intersection of the two fields in two large recent surveys seems a reasonable criterion, which leads to two of the three referenced papers. With the restriction of being currently unable to share a revised manuscript in mind, we take on board further suggestions and will modify the text to focus on clarity, include the timing/computational cost more prominently in the main manuscript (there is also more space when revisions are allowed) and enhance the discussion on score equivalence. We thank the reviewer for reminding us that these are important points to further improve the manuscript and we hope we could assuage their concerns.
Marich: A Query-efficient Distributionally Equivalent Model Extraction Attack
Accept (poster)
Summary: This paper proposes a model-oblivious query-efficient black-box model extraction attack. This attack is achieved by solving a proposed distributionally equivalent and max-information model extraction problem. To solve the problem, this paper develops an active sampling-based query selection algorithm, MARICH, to select the most informative queries that simultaneously maximize the entropy and reduce the mismatch between the target and the stolen models. Experimental results validate the effectiveness and query efficiency of the proposed method. Strengths: S1: This paper proposes a novel notion called distributional equivalence extraction that can extend the existing notions of task accuracy and functionally equivalent model extractions. This paper also theoretically demonstrates that solving the developed optimization problem which correlates the distributionally equivalent with another proposed max-information extraction can simultaneously maximize the entropy and reduce the mismatch between the target and the stolen models. S2: The experiments set up in the paper basically address general doubts that arose while reading the paper, which increases the credibility of the proposed method. S3: The layout of this paper is elegant, and the overall structure and logical progression are clear. These advantages make this paper enjoyable to read and easy to follow. Weaknesses: W1: The employed datasets which are used for training the target model being extracted are considered to be simple. W2: Although this paper repeatedly foregrounds that their approach is designed to extract publicly available APIs, there is no experiment that aims to extract models deployed in real-world APIs.
Technical Quality: 3 good Clarity: 3 good Questions for Authors: Based on W2, I am wondering if you can provide practical evidence that can demonstrate that your proposed method can be used to extract real-world APIs (e.g., classifiers deployed by Amazon AWS, Google API, Microsoft Azure)? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Please refer to the weakness and questions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the time spent reviewing and encouraging comments about the novelty, presentation, and experimental results. **Querying industrial black-box APIs.** Due to genuine financial constraints, we cannot run experiments by querying the industrial APIs. Instead, we trained our own models and used them as the black-box APIs that predict for a given query datapoint. The code is at your disposal for reproducing the black-box models attacked in this work through API-like queries. **Experiments with larger datasets.** We would like to refer the reviewer to the full paper with appendix which is provided in the supplementary materials. This contains further experiments with larger datasets, such as ImageNet, being used as the public dataset for the attacks. We hope that our response addresses the reviewer's concerns. Let us know if you have any other queries. --- Rebuttal Comment 1.1: Comment: Thank you for your response. While I believe the assertion regarding the effectiveness of extracting real-world APIs might be slightly overstated, you have addressed most of my primary concerns. Therefore, I am currently inclined to maintain my current rating for this paper.
Summary: Given a dataset DQ and blackbox access to a model trained on a dataset DP, the authors identify a subset of the dataset DQ that can be used to train another model. They use a metric based on energy to select the samples. The authors use model stealing/extraction attacks as their primary related work, though the work has more resemblance to the literature on coresets. In coresets, DP and DQ are the same. But, overall, the algorithm MARICH seems very similar to what would be used in the coresets work. The mismatch between DP and DQ appears to be relatively low in the experiments. For example, MNIST versus EMNIST; CIFAR 10 versus CIFAR10 (same?); and BBC News versus AGNews. Notably, the DQ datasets are about the same or larger than original datasets. It seems a better baseline for the work would be substantial work on coresets, which also aims to find an efficient subset of a dataset that can be used to train a comparable model to one that is trained on the full dataset. The reason is that DP and DQ are not that different in the authors' experiments. Indeed, if trained on 100% of the dataset, the accuracy appears to be very high. Thus, it seems an adversary has little incentive to select a subset, if the primary objective is to create a high-fidelity clone. Indeed, prior work on model stealing generally aims to exceed the performance of what would be achievable with just DQ. Authors cite one of the coresets papers on that [lines 497-498, SS18]. But, in the last 5 years, there is a substantial body of work on coresets based on various metrics, including energy scores, forgetting, AUM, and Coverage-Centric core-set selection based on stratified sampling. I would have liked to see a comparison with that instead. Strengths: Entropy-based sampling is a reasonable strategy to reduce the size of the selected set. They use 3 datasets, MNIST, CIFAR10, and BBC News (on BERT).
Use of BERT in the evaluation is good, since most other papers on coresets use Weaknesses: The paper does not appear to be well-positioned with respect to prior work. Experiments could be better argued. Membership Inference attack is an interesting metric to use in the paper for comparing the extracted model with the original one. Is there a privacy goal here? Why not use a straight Accuracy measure on a test dataset from the original dataset DP, as in most core-set work or model extraction work? Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Compare more extensively with core-set work in both related work and in the experiments. Use metrics and datasets that are similar to those in your baseline work, so that one can more directly compare your results with those in earlier work. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: I did not see a discussion on limitations in the paper or potential negative societal impact of the work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback. We respond below to their concerns and comments. **Explaining the experimental design:** **1. MI attack:** A membership inference attack shows informativeness of the extracted model, while never using the private training dataset. Our experimental results validate our hypothesis that since Marich is derived from the principle of max-information attack, it extracts models yielding high MI accuracy. This is a direct violation of privacy (in the MI attack sense) of the users in the private training dataset. **2. Accuracy measure:** We agree with the reviewer about the test dataset and this is exactly what we have done. We use a subset of the private dataset as the test dataset for both the target and extracted models. We mention this also in our experimental goal ``How do the accuracy of the model extracted using MARICH \textbf{on the private dataset} compare with that of the target model, and RS with same query budget?" (Line 305 and 306) **Core-set literature:** Thanks for pointing us to the core-set literature. We are aware of this growing field, and we have compared our results with the models extracted using a compatible core-set approach, such as [SS17]. We have a detailed discussion on the related active sampling strategies in Appendix C of the extended paper in the supplementary material. Here, we discuss some other interesting works in this direction. We show why they are not directly implementable in our problem. [Kil+21b] aims to identify a subset of the training dataset for training a specific model. The algorithm needs white-box access to the model, and also needs to do a forward pass over the whole training dataset. Thus, the objective of our work is different. Also, a white-box sampling algorithm and the assumption of being able to retrieve predictions over the whole training dataset are not feasible in our problem setup.
[Kil+21a] requires access to the average loss over the training dataset or a significantly diverse validation set. Then, the gradient of loss on this training or validation set is compared with that of the selected minibatch of datapoints. In a black-box attack, we do not have access to the average loss over a whole training or validation dataset. Thus, it is not feasible to deploy the proposed algorithm. [MBL20] proposes an elegant preprocessing algorithm to select a coreset of a training dataset. This selection further leads to an efficient incremental gradient based training methodology. But in our case, we sequentially query the black-box model to obtain a label for a query point and use it further for training the extracted model. Thus, creating a dataset beforehand and using it for preprocessing will not lead to an adaptive attack and also will not reduce the query budget. Thus, it is out of the scope of our work. [KS22] needs an auxiliary classifier to first use a subset of labelled datapoints to create low dimensional embeddings. Then, according to the query budget, it chooses the points from the sparse regions from each cluster found from the low dimensional embeddings. This is another variant of uncertainty sampling that hypothesises the points from the sparse region are more informative in terms of loss and prediction entropy. This design technique is model-specific, and thus incompatible with our setting, and it also increases the number of queries required to the target model. We will add this discussion in the final version of our paper and add designing core-set based algorithms as interesting future work. **Comparison with baselines and performance metrics.** We have shown our performance on all the relevant metrics of model extraction attacks (e.g. accuracy on test dataset (App. D.1.) [MSDH19, CSBB+18, OSF19], parametric fidelity (App. D.3) and prediction matching (App D.2.2) of the extracted models [LM05, BBJP18, JCB+20]).
We also add new metrics such as KL-divergence between predictive distributions of the extracted and target models as a measure of the goodness of the extraction attack (Figure 3, Appendix D.2.1), and accuracy of MI attacks with extracted models as a measure of the informativeness of the extracted models (Table 1, Appendix D.4). We have also compared with the previously used active learning algorithms for attacks. We would like to emphasise that this is not a work regarding generic active learning or training on a subset of a dataset. The goal of this work is to design an adaptive and frugal model extraction algorithm from the first principles of distributionally equivalent/max-information extraction. Since the final algorithm is similar to an active query selection algorithm in spirit, we compare its performance with 5 other types of active selection approaches, namely ``K-Center (KC) [SS18], Least-Confidence sampling (LC) [LS06], Margin sampling (MS) [BBZ07, JG19], Entropy Sampling (ES) [LG94], and Random Sampling (RS)". Designing a better active query selection algorithm using any of these five approaches would be an interesting research work but out of the scope of this paper. Hope we have answered your comments and concerns, and we also look forward to responding if you have any further queries. Hope our response will convince you to raise the score. **New references** [MBL20] Baharan Mirzasoleiman, Jeff Bilmes, and Jure Leskovec. “Coresets for data-efficient training of machine learning models”. ICML 2020 [Kil+21a] Krishnateja Killamsetty et al. “Grad-match: Gradient matching based data subset selection for efficient deep model training”. ICML 2021 [Kil+21b] Krishnateja Killamsetty et al. “Retrieve: Coreset selection for efficient and robust semi-supervised learning”. NeurIPS 2021 [KS22] Yeachan Kim and Bonggun Shin. “In Defense of Core-set: A Density-aware Core-set Selection for Active Learning”.
ACM SIGKDD 2022 --- Rebuttal Comment 1.1: Comment: Thanks for your response and clarifying some of the points. I have some follow-up questions/concerns: 1. Is this understanding generally correct? "The mismatch between DP and DQ appears to be relatively low in the experiments. For example, MNIST versus EMNIST; CIFAR 10 versus CIFAR10 (same?); and BBC News versus AGNews. Notably, the DQ datasets are about the same or larger than original datasets." 2. For the datasets where the public dataset is generally a superset of the private dataset (even if there is a slight mismatch in distribution), what would be the performance of a baseline in which the adversary simply trains a model on the public dataset (no querying or active sampling needed), but uses it on the private dataset? It seems to me that could do pretty well on some measures. 3. Why should an MLaaS platform worry? Here is perhaps a cynical argument. If I were an MLaaS platform, it seems a competitor stealing my model using the techniques from the paper shouldn't worry me much -- with the accuracy gap between the stolen model and my model, it seems I would have no reason to worry about my market share or competitive advantage being lost. And, even if you get some idea of the distribution of the private dataset, is it accurate enough for it to really be a source of worry or competitive threat? 4. There is some recent work on coreset selection with high pruning rates by Zhang et al. (ICLR 2023) that would be interesting to compare with or adapt, since that is aiming for high performance with very small coresets. Overall though, the writeup needs to draw a better contrast with the general field of coreset selection. A possible angle is to better argue that you are using the teacher (private) model to inform the coreset selection, whereas in the standard coreset world, one uses a model trained on the same dataset to inform the coreset selection. Thanks.
--- Reply to Comment 1.1.1: Title: Author response to additional questions Comment: We thank the reviewer for the response to our rebuttal. Here we address your concerns in further detail. Q1. **(a) Choice of DQ:** The only prior knowledge required to choose DQ is the data-type of DP, i.e. whether the target model uses images or texts. Under the mild assumption that the attacker knows the data-type, we conduct experiments with a significant subset of datasets used in public data-based model extraction attacks, e.g. MNIST, CIFAR10 [PMG+17, JSMA19, PGS+20], AGNews [PGS+20]. We have also included datasets, such as ImageNet, EMNIST, BBCNews, which are novel in this context. **(b) Mismatch of DP and DQ:** For Marich, the datasets DP and DQ can be significantly different. For example, we have attacked an MNIST-trained model with both EMNIST and CIFAR10. Though MNIST contains handwritten digits, CIFAR10 contains images of aeroplanes, cats, dogs etc., and EMNIST contains handwritten letters. Thus, the data generating distributions and labels are significantly different. We have not attacked a CIFAR10-trained model with CIFAR10 as DQ. Rather, we have attacked a CIFAR10-trained ResNet with ImageNet as DQ. These two datasets are also known to have very different labels and images. Thus, we would respectfully disagree that in our experiments, DP and DQ are close in terms of data distributions. Rather, our experiments show that Marich can handle both data and model mismatch, which is an addendum to the existing model extraction attacks. **(c) Size of DQ:** Here, the 'size' of the dataset DQ is neither a benefit nor a loss as we aim to use a small number of queries. It is the task of the attacker to reduce the number of queries for any given DQ. For example, Marich uses only 1.92% of CIFAR10 to attack an MNIST-trained logistic regression model, and 16.58% of ImageNet to attack a CIFAR10-trained ResNet. Q2.
**Attacks without queries:** We simulate Random Sampling (RS) (Line 300), which is the closest to the proposed approach that does not use any data-adaptive/active sampling. In RS, we uniformly sample inputs from DQ, send them for querying, and use the labels predicted by the target model to train the extracted model (App. C, page 18). Our experiments show that models extracted by RS perform significantly worse than the models extracted by Marich and other adaptive query selection algorithms (Tables 2-4, Appendix D, pages 26-27). This shows that active sampling based methods are more effective than the suggested non-adaptive approach. But we do not understand how we can "extract a target model" without using any information from it. The definition of extraction attacks states that *"The typical setup for a model extraction attack is the one of an API, such as the ones provided by MLaaS platforms, or a model served on an edge device, such as the image models found in many of our smartphones. In both cases, the adversary is able to send inputs to the model and observe the model’s prediction, but they do not have access to the model’s internals."* [(M. Jagielski and N. Papernot)](https://cleverhans-lab.github.io/2020/05/21/model-extraction.html) Thus, even if the proposed attack without queries works under some performance measures, it would be a new type of attack, which is out of the scope of the work on extraction attacks. --- Rebuttal 2: Comment: Hi there, Thanks a lot for helping with the reviewing process at NeurIPS. Could you please interact with the authors to ensure your concerns are sufficiently addressed, or are there fundamental flaws that cannot be fixed? It would be great if you could please make your questions more concrete for the authors to respond to. Thank you. Your AC
Summary: The authors studied black-box model extraction, which is a practical scenario in MLaaS. In order to boost attack efficiency in this extraction, they focused on distributional equivalence and max-information model extraction. For distributional equivalence, they proposed a distributional notion of equivalence. For max-information extraction, they maximised the mutual information between the extracted and target models’ distributions. Experiments showed the effectiveness of MARICH. Strengths: + The topic is of practical importance. + The two proposed methods are novel and elegant. Weaknesses: - The writing of this paper can be further improved and reorganized. The related work could be presented as a separate section in the paper. The results in Table 1 should be presented more clearly, especially the effectiveness of MARICH. - The experiments may not be comprehensive enough. For the field of computer vision, only MNIST and CIFAR-10, which are relatively small datasets, were considered. Since this paper focuses on black-box model extraction in real-world scenarios, I suggest conducting experiments on ImageNet or its subsets. - In the results of Table 1, MARICH does not show overwhelming advantages and even performs worse than entropy sampling on STL/ResNet-18. Can the tradeoff between them be analyzed? Technical Quality: 3 good Clarity: 3 good Questions for Authors: refer to weakness. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: refer to weakness. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for his/her valuable time spent reviewing. **Improved presentation of Table 1:** We would like to refer the reviewer to the full paper with appendix, which is provided in the supplementary materials. We apologize that the rectifications to Table 1 were made only in the full version, which contains the accurate presentation of our experimental results. **Experiments with ImageNet:** We have actually done experiments with ImageNet/ResNet-18. We have described this in the text of the main paper (line 297, lines 321-322) and later added the rectified Table 1 in the extended paper. Please check the extended paper in the supplementary material. The results show the applicability of our algorithm to larger datasets. We would also like to draw the reviewer's attention to the fact that the row with STL/ResNet-18 in Table 1 of the first draft is rectified in the extended version. **Trade-off of different performance measures:** We observe that even with the limited query budget (less than in most of the related works), we achieve better performance in most of the cases, or come close enough to that of the "Best of the Competing" attacks whenever we cannot (Table 1 in the extended paper in the supplementary material). It would be interesting to check whether the performance of different attacks on different datasets/models can be characterised. This is an interesting future question and is also related to the instance hardness of privacy attacks as in (Carlini et al., 2022: https://arxiv.org/abs/2112.03570). Since the question under study is not to design a better active learning algorithm but to design an adaptive extraction attack from first information-theoretic principles, the different performance measures depend not only on the query selection algorithm and the dataset but also on the information a model's output leaks about its input.
We hope that our response addresses the reviewer’s questions, and will convince him/her to raise the score. --- Rebuttal Comment 1.1: Comment: Thank you for your response. I will consider raising my scores. --- Reply to Comment 1.1.1: Title: Authors' response to reviewer's official comment Comment: Thanks for your consideration of our rebuttal. As the discussion period comes to a close, we are looking forward to your revised score.
Summary: The authors propose a model extraction attack which queries a model's publicly available API and chooses samples by maximizing the entropy of the target model's predictions and maximizing the agreement between the extracted model's and the target model's predictions. These samples are then used to train a surrogate model. Strengths: 1. The paper is very well presented. 2. The authors are very clear and good at motivating their proposed method. 3. The results of the attack are quite impressive when compared to baselines. 4. Experimentation is thorough, including a range of target models and data distributions considered. Weaknesses: 1. While the attack is well motivated, it seems like the major practical contribution of this work is the addition of the entropy term in eq. 7. If so, this seems like a relatively minor contribution. I don't have background in this area, but it seems like any previous work should at least consider the Model-mismatch term in your optimization problem. Is this not the case? Nonetheless, I do think the work offers a principled approach to the problem, and has valuable contributions, so this is a minor weakness in my eyes. Update: I have read the authors' response to my review and my concerns are largely satisfied. I will keep my score as a 7. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: 1. Why does the definition of "Distributionally equivalent model extraction" (Def. 3.1) measure the divergence between joint distributions? It seems like just considering the divergence between the distributions of the two models' outputs would be sufficient. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The authors do not include a limitations section in their main body. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for acknowledging the strengths and soundness of the contribution as well as for their comments toward improving the manuscript. **Novelty of contributions:** We refer to the general comments for an in-depth discussion. **Formulation of distributionally equivalent model extraction:** A classifier induces a predictive distribution over labels/outputs for a given input. But this is a local view of the classifier. What we want to replicate here is the global predictive distribution of the classifier, i.e. how it induces a predictive distribution over all possible inputs sampled from a dataset or, equivalently, a data-generating distribution. Later, we observe the importance of this formulation, as it leads to the ERM-like query selection objective in Eq. (5) and (6) (Line 224, page 5). Had we considered only the KL divergence between the pointwise predictive distributions, we would not have obtained the $E_Q$ term in Eq. (5) and thus no natural ERM formulation as in Eq. (6). We hope that our response addresses the reviewer's questions. Let us know if you have any further queries.
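To make the role of the $E_Q$ term concrete, here is a minimal, hedged sketch (not the authors' implementation) of estimating the distributional divergence by Monte Carlo: sample inputs from the query distribution Q and average the pointwise KL between the two models' predictive distributions. The callables `target_predict`, `extracted_predict`, and `sample_query` are hypothetical stand-ins for the real models and query distribution.

```python
import math
import random

def pointwise_kl(p, q, eps=1e-12):
    """KL(p || q) between two categorical predictive distributions (lists of probs)."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def distributional_divergence(target_predict, extracted_predict, sample_query, n=1000):
    """Monte Carlo estimate of E_{x~Q}[ KL(p_T(.|x) || p_E(.|x)) ].

    The outer expectation over x ~ Q is the E_Q term that turns the
    objective into an ERM over queries.
    """
    xs = [sample_query() for _ in range(n)]
    return sum(pointwise_kl(target_predict(x), extracted_predict(x)) for x in xs) / n

# Toy usage: two fixed predictive distributions over 3 classes.
random.seed(0)
est = distributional_divergence(
    lambda x: [0.7, 0.2, 0.1],   # hypothetical target model
    lambda x: [0.6, 0.3, 0.1],   # hypothetical extracted model
    lambda: random.random(),     # hypothetical query distribution Q
)
```

With constant predictive distributions the estimate reduces to a single pointwise KL; with input-dependent models it averages the mismatch over the query distribution, which is exactly what the global (rather than local) view requires.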
Rebuttal 1: Rebuttal: We would like to thank the reviewers for their valuable time and efforts towards improving the manuscript. In the following, we highlight the novelty of our contributions and the breadth of our evaluation. We then address comments specific to each reviewer by responding to them directly. **Novelty of contributions and breadth of evaluations:** The contributions of this work are threefold: 1. Rather than aiming at replicating the task accuracy or the functional equivalence (i.e. replicating the model weights), we focus on extracting the predictive distribution of a model given a data generating distribution. We encode this objective in two ways: (a) distributionally equivalent extraction, (b) max-information extraction. Our hypothesis is that extracting the predictive distribution of a model is good enough to replicate other properties of the model (e.g. accuracy) and also to run other attacks (e.g. membership inference). 2. We show that both of the problem formulations (i.e. distributionally equivalent extraction and max-information extraction) lead to a unique variational optimisation problem. This optimisation problem provides us with a method to adaptively and sequentially choose queries from another dataset, without accessing any side information about, or part of, the private training dataset. Previously, researchers have deployed entropy sampling and other active sampling methods to efficiently select queries for attacks. But our natural formulation shows that these methods can be further grounded on the objectives of distributionally equivalent extraction and max-information extraction. Thus, though the end result is an active query selection algorithm, the fundamental approach to designing it is different and novel with respect to the existing works. 3. We test our proposed algorithms on multiple types of image (MNIST, CIFAR10, EMNIST, ImageNet etc.)
and text (BBCNews, AGNews) data, and different types of models (logistic regression, CNN, ResNet-18, BERT). Experiments demonstrate three things. (a) Our methodological approach to designing the adaptive extraction algorithm is query efficient. (b) Our hypothesis that replicating the predictive distribution of a model is enough to replicate its functionality (e.g. accuracy) and to use it for MI attacks is valid. (c) Our approach allows us to design a model-oblivious and dataset-oblivious attack, as we can extract the true model's predictive distribution with a different model architecture and a mismatched querying dataset. We hope this commentary clarifies the motivation behind the problem and its relevance. We are looking forward to responding to any further questions.
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Accelerate Multi-Agent Reinforcement Learning in Zero-Sum Games with Subgame Curriculum Learning
Reject
Summary: The paper proposes a general subgame curriculum learning framework to accelerate MARL training for zero-sum games. It adopts an adaptive initial state distribution by resetting agents to some previously visited states where they can quickly learn to improve performance. The author derives a subgame selection metric that approximates the squared distance to NE values and further adopts a particle-based state sampler for subgame generation. Strengths: The paper presents a general framework to accelerate NE learning in zero-sum games by training over a curriculum of subgames. The author develops an automatic curriculum learning algorithm, i.e., Subgame Automatic Curriculum Learning, which can adopt any MARL algorithm as its backbone and preserve the overall convergence property. The paper is well written. A motivating example is also described to illustrate the main idea, which is beneficial for readers to understand the work. Weaknesses: 1. As we can see, the convergence speed is accelerated by the proposed method. However, there is no evidence that the final performance is improved. 2. The core part of this work is computing the weight of a state. It would be better to compare with other metrics. Technical Quality: 3 good Clarity: 3 good Questions for Authors: I mainly have the following concerns. 1. How about the performance improvement brought by the proposed method? 2. Would it be better if other metrics, for example, the least visited times of a state, are used solely or combined with the proposed metric in Section 4.2? It would be better to see more comparison results about this. 3. It seems there are mistakes in Eq. (8) and Eq. (9). The index 2 is missing from (7) to (8). How is the equation transformed from (8) to (9)? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors could discuss the application of the proposed method to settings with three or more players. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your time and valuable comments! We appreciate your acknowledgment of our proposed framework and recognition of the illustrative example. We hope our responses can address your concerns. **Q1: The convergence speed is accelerated by the proposed method. How about the final performance improvement?** In zero-sum games, the optimal solution is the Nash equilibrium (NE) strategy. And according to the Minimax theorem [1], the value of the NE is unique in zero-sum games. Therefore, in theory, we cannot change the final performance of NE strategies in zero-sum games. In practice, it is very hard to learn the exact NE strategies in complex games like GRF. So the performance of different algorithms is often evaluated by using the same amount of training samples and checking the policy's exploitability. In this case, faster convergence means better performance, and our results in MPE and GRF show that SACL achieves the lowest exploitability among all algorithms. [1] Von Neumann, John, and Oskar Morgenstern. "Theory of games and economic behavior, 2nd rev." (1947). **Q2: It would be better if other metrics, for example, the least visited times of a state, are used solely or combined with the proposed metric in Section 4.2.** Thank you for your suggestion; we added the least visited metric to compare it with our proposed metric, and the result is shown in Fig. 3(a) of the global response PDF. The four metrics in our ablation study (Section 5.2) are included for comparison: uniform, bias-only, variance-only, and TD error. We also consider combining our metric $w_{\text{SACL}}(\cdot)$ with the least visited metric $w_{\text{lv}}(\cdot)$, i.e., $w(s) = w_{\text{SACL}}(s) + \lambda\cdot w_{\text{lv}}(s)$, and the result is shown in Fig. 3(b) of the global response PDF. As shown in the results, our proposed metric has the best performance, and combining it with the least visited metric does not improve the result.
The least visited metric does not work well in zero-sum games because the importance of a state depends not only on the visited times, but also on the opponents' policy. Even if a state has been visited many times, if the opponents' policy has changed, the agents still need to be trained on the induced subgame. Therefore, it is not a suitable subgame sampling metric in zero-sum games. **Q3: It seems there are mistakes in Eq. (8) and Eq. (9). The index 2 is missing from (7) to (8). How the equation is transformed from (8) to (9).** The derivation of Eq. (8) and Eq. (9) omitted a few steps, so it may be a little confusing but there is no mistake. * From Eq. (7) to Eq. (8), we use the fact that $V_2^*(s) = -V_1^*(s)$ (zero-sum) and $V_1(s)=\tilde{V_1}(s)$, $V_2(s)=-\tilde{V_2}(s)$ (definition), and the detailed derivation is: $$ \begin{aligned} w(s) &= \frac{1}{2} \sum_{i=1}^2 (V_i^*(s) - V_i(s))^2 \newline &= \frac{1}{2} (V_1^*(s) - \tilde{V}_1(s))^2 + \frac{1}{2} (-V_1^*(s) + \tilde{V}_2(s))^2 \newline &=\mathbb{E}_i[(V_1^*(s) - \tilde{V}_i(s))^2]. \end{aligned} $$ * From Eq. (8) to Eq. (9), we use the fact that $\mathbb{E}[A^2] = \mathbb{E}[A]^2 + \text{Var}[A]$. --- We genuinely value your dedication to reviewing our paper and believe our detailed responses have addressed your concerns. We would really appreciate it if you could consider raising the rating of our work based on our responses. --- Rebuttal Comment 1.1: Title: Thanks for your explanation. Comment: Thanks for your explanation.
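The identity chain in the rebuttal above, from Eq. (7) through Eq. (9), can be checked numerically. Below is a toy sketch using made-up values (`v_star` and `v_tilde` are illustrative, not from the paper): it verifies that the zero-sum substitution and the decomposition $\mathbb{E}[A^2] = \mathbb{E}[A]^2 + \text{Var}[A]$ give the same weight.

```python
import statistics

# Toy NE value and two estimated value functions at a state s (zero-sum game).
v_star = 0.4            # V_1^*(s); by the zero-sum property, V_2^*(s) = -v_star
v_tilde = [0.1, 0.9]    # \tilde{V}_1(s), \tilde{V}_2(s)

# Eq. (7): w(s) = 1/2 * sum_i (V_i^*(s) - V_i(s))^2,
# with V_1(s) = +v_tilde[0] and V_2(s) = -v_tilde[1] by definition.
w_eq7 = 0.5 * ((v_star - v_tilde[0]) ** 2 + (-v_star + v_tilde[1]) ** 2)

# Eq. (8): w(s) = E_i[(V_1^*(s) - \tilde{V}_i(s))^2], average over the two estimates.
errors = [(v_star - v) ** 2 for v in v_tilde]
w_eq8 = sum(errors) / len(errors)

# Eq. (9): split the second moment via E[A^2] = E[A]^2 + Var[A]
# (population variance, since the expectation is over the two estimates).
a = [v_star - v for v in v_tilde]
mean_a = sum(a) / len(a)
w_eq9 = mean_a ** 2 + statistics.pvariance(a)

assert abs(w_eq7 - w_eq8) < 1e-12 and abs(w_eq8 - w_eq9) < 1e-12
```

The bias term (`mean_a ** 2`) and the variance term (`statistics.pvariance(a)`) are exactly the two components of the subgame sampling metric described in Section 4.2.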
Summary: The paper proposes a novel subgame curriculum learning framework for accelerating multi-agent reinforcement learning (MARL) in zero-sum Markov games. The framework uses an adaptive initial state distribution to induce subgames of varying difficulty for agents to learn, and leverages a sampling metric that approximates the squared distance to Nash equilibrium (NE) values to prioritize subgames with fast value change and high uncertainty. The paper instantiates the framework with a particle-based state sampler and integrates it with any MARL algorithm, resulting in the Subgame Automatic Curriculum Learning (SACL) algorithm. The paper evaluates SACL in three zero-sum environments and shows that it converges faster and achieves lower exploitability than existing methods. Strengths: - The paper addresses an important and challenging problem of reducing the computational cost of solving complex zero-sum games with MARL, which has many potential applications and implications. - The paper provides an illustrative motivating example to justify the effectiveness of the subgame curriculum learning framework, which leverages ideas from goal-conditioned RL and prioritized experience replay. - The paper is able to provide practical methods to approximately solve the hard parts of the theoretical work. For example, the paper introduces a novel subgame sampling metric that approximates the squared distance to NE values with a bias term and a variance term. The paper also adopts a particle-based state sampler that is compatible with most MARL algorithms. - The paper conducts extensive experiments on three different zero-sum environments and demonstrates that SACL can converge to NE policies with substantially fewer samples. The ablation study is able to demonstrate how well the approximation methods in SACL work compared to other alternatives.
Weaknesses: - The paper assumes that the environment can be reset to any desired state to generate induced subgames, which may not be feasible or realistic in some settings. - In Section 5.1, it is not clear how the approximate exploitability is computed using MARL methods for best response training. What are the exact algorithms used for this purpose? How many samples are used for each best response training? How reliable are these estimates? Some details on these aspects would help evaluate the results more fairly. - In Section 5.2, it would be helpful to report some quantitative results on how different choices of the state buffer size, the hyperparameter alpha, the subgame sample probability, or other hyperparameters affect SACL's performance or convergence. - It would also be helpful to have more details on how to implement FPS for buffer updates and how to measure the distance between states. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: - In the subgame sampling metric, the difference between two consecutive value function checkpoints is used to approximate the difference between the NE value and the current value. What if the value function goes into a bad local minimum and stops moving? Has this been observed in any experiment? - In Section 5.2, the paper reports that the subgame sampling metric works better than the TD error in practice. Since the value function is often updated based on the TD error between iterations, what is the intuition for the subgame sampling metric being better? - How does SACL handle cases where there are multiple NEs in a game? Does it converge to one randomly? - How generalizable is SACL to other types of games such as cooperative games or general-sum games? What modifications or extensions are needed to adapt SACL to these settings? Confidence: 4: You are confident in your assessment, but not absolutely certain.
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: - The paper requires access to full state information and reset function for each environment. - The paper relies on approximate exploitability as a proxy for measuring closeness to NE policies. However, approximate exploitability may not reflect the true performance gap between different policies due to sample variance or approximation errors. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your appreciation of our work and thoughtful comments! We are encouraged to see your acknowledgment of our work’s novelty and the positive assessment of our experiment results. We hope our responses can address your concerns. **Q1: The assumption that environments can be reset to any state may not be feasible or realistic in some settings.** We agree that some environments don't natively support this assumption. But this feature is easy to implement by modifying the environment’s reset function. In fact, the MPE environment in our experiment doesn’t satisfy this requirement, but a little change to its `reset_world()` function makes it work with our framework. We would also like to remark that this assumption is very common in the curriculum learning literature and is widely used in works like [1, 2], where setting tasks is equivalent to resetting initial states in our case. [1] Florensa, Carlos, et al. "Automatic goal generation for reinforcement learning agents." ICML, 2018. [2] Portelas, Rémy, et al. "Teacher algorithms for curriculum learning of deep rl in continuously parameterized environments." CoRL, 2020. **Q2: How is the approximate exploitability computed using MARL methods?** The implementation details are included in Appendix B.2 of the supplementary material. We will add more information to the main text in the next version. We use MAPPO to train an approximate best response (BR). In principle, the BR is trained until convergence. A BR is trained for 200M samples in MPE and 400M in GRF. We ensure our results are reliable in the following ways: 1. We use many more samples to train the BRs than we used to train the NE policies (200M >> 40M, 400M > 50M/100M) to ensure the BRs are fully converged. 2. We train individual BRs for different seeds and report the mean and std. A single curve in the exploitability figures requires $9 \times 3 = 27$ (checkpoints $\times$ seeds) trained BRs. 3.
We render the behaviors of different algorithms' policies to validate that SACL does learn a stronger strategy. The gifs can be found on our [website](https://sites.google.com/view/sacl-neurips). **Q3: More quantitative results on how different hyperparameters affect SACL's performance.** All these ablation studies are included in Appendix C.4 and shown in Fig. 20. Please see the detailed discussion in the appendix. **Q4: More details on how to implement FPS for buffer updates and how to measure the distance between states.** In general, FPS iteratively selects the farthest point from the current set of points. The distance between two states is simply the Euclidean distance. The distance between a state $s_a$ and a set of states $S$ is the smallest distance between $s_a$ and any state in $S$, i.e., $\text{min}_{s\in S} \|s_a - s\|$. For implementation, we first normalize each dimension of the state vector to make all values lie in the range $[0, 1]$. Then we directly use the `farthest_point_sampler()` function from the [Deep Graph Library](https://docs.dgl.ai/en/latest/api/python/dgl.geometry.html#farthest-point-sampler) to utilize GPUs for fast and stable results. **Q5: In the subgame sampling metric, what if the value function goes into a bad local minimum and stops moving?** If we only consider the bias term in the metric, falling into a local minimum would indeed make the metric fail. Fortunately, we also include a variance term of several value functions initialized with different seeds and trained with different samples. If one value function goes into a bad local minimum, it will make the variance large, and further make the weight large so that the policies are trained more on this unsolved subgame. **Q6: What is the intuition of the subgame sampling metric being better than the TD error metric?** One reason is that the TD error metric only measures the change of value functions, which is similar to the bias term in our metric. 
We have an additional variance term to account for the uncertainty and thus get better performance. Another reason is that the TD error is less stable because it always uses the latest value, while the bias term in our metric is computed using two consecutive checkpoints, which are usually 5-10 iterations apart and more stable. **Q7: How does SACL handle cases where there are multiple NEs in a game?** In zero-sum games, although there may be multiple NE strategies, their NE values are the same [4]. This means that any NE strategy is optimal. Therefore, we only aim to learn an NE but don't consider which NE. In practice, since SACL preserves the convergence property of the backbone MARL algorithm, it is the backbone algorithm rather than SACL that determines which NE it converges to, and SACL only makes this process faster. [4] Von Neumann, John, and Oskar Morgenstern. "Theory of games and economic behavior, 2nd rev." (1947). **Q8: How generalizable is SACL to other types of games such as cooperative or general-sum games?** SACL consists of three components: the subgame CL framework, the sampling metric, and the particle-based sampler. The framework and the sampler can be applied to cooperative and general-sum games because they don't require the zero-sum property. The only part to change is the metric. * For cooperative games, there are clear progression metrics like the success rate, so we can directly use these metrics as the sampling weight. In Appendix C.3, SACL achieves comparable results with one of the strongest CL algorithms for a cooperative game. * General-sum games are more complicated, and there is no clear metric to measure the subgames’ learning progress to the best of our knowledge. A possible way is to still start from Eq. (7) to derive a metric. It would be an interesting extension of SACL and we leave it for future work. --- We extend our sincere gratitude for your feedback and hope our answers have addressed your concerns.
Your support is invaluable to us and we genuinely hope that our efforts merit a raise in your rating. --- Rebuttal Comment 1.1: Title: Thanks for the response Comment: I want to thank the authors for their response. I find it helpful to address my questions. I would like to keep my original scores at this point.
Summary: The paper proposes a subgame curriculum learning framework to accelerate multi-agent reinforcement learning (MARL) training for zero-sum games. The framework adopts an adaptive initial state distribution by resetting agents to some previously visited states where they can quickly learn to improve performance. The paper derives a subgame selection metric that approximates the squared distance to Nash equilibrium (NE) values and further adopts a particle-based state sampler for subgame generation. Experiments in the particle-world environment, Google Research Football environment, and hide-and-seek show that SACL produces much stronger policies than baselines. Strengths: - The paper provides a detailed analysis of the proposed approach and justifies it through experiments and analysis of a simple iterated Rock-Paper-Scissors game. - The paper provides a clear and concise explanation of the proposed approach and its implementation. - SACL is shown to produce much stronger policies than baselines in experiments conducted in the particle-world environment and Google Research Football environment. Weaknesses: - There is a gap between the motivating iterated Rock-Paper-Scissors game and the experiments conducted in the particle-world environment and Google Research Football, since the state and action spaces are discrete and finite in RPS, while these spaces are continuous in MPE and GRF. Also, states are not communicating in RPS while they are in MPE or GRF. (In one rollout, two states are communicating if and only if each is reachable from the other with nonzero probability in a finite number of steps.) So it is acceptable to use samples in RPS, since it is a trade-off between time and space. But the state in MPE or GRF is something like a position or speed, so the advantage shown in the motivating scenario is hard to extend to more realistic environments. - The link between eq.9 and eq.10 is weak, which should be the key to the theoretical contribution of the paper.
I have several concerns about eq.10. - The estimated value function at different checkpoints represents different policies' values. Do the authors consider the missing $\pi$ inside the value function? - Does the asymptotic convergence of the estimated value function of a few sampled states mean the policies converge to NE? The explanation in the Appendix is not theoretical and is less convincing, as it only gives examples. - The subgame sampler is also essential; why should the farthest point be sampled first? - Go would be a good baseline for SACL, since it is a fair game. To my understanding, SACL can be seen as an alternative to MCTS. One can start from an arbitrary state of a Go board, roll out, and then train the policy and value function. From this point of view, only having one rollout in Alg. 2 might cause a large variance in the value function. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: See Weaknesses. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: The paper does not explicitly mention any limitations of the proposed method. The paper has no potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable feedback! We are heartened to see your recognition of our detailed analysis and strong experimental results. For your constructive questions, we hope the following response can address your concerns. **Q1: There is a gap between the iterated RPS game and the experiments in MPE and GRF; the advantage shown in the motivating example is hard to extend to more realistic environments.** We agree with the reviewer that the type of state/action space and the communication property differ between the iterated RPS game and the MPE/GRF environments. However, the advantage of SACL is conceptually the same: starting from easier subgames and gradually moving to harder ones can solve the full game more efficiently. Take the MPE hard scenario as an example. Since the predators are in the top-right corner and the prey is in the bottom-left corner, the full game is hard to solve. A suitable order of subgames to learn would be to start from the easiest subgame, where predators and prey are all near the center, and gradually move to harder subgames where they are at the edges or corners. This is conceptually the same as the iterated RPS game, where we learn from the last round to the first round. We also visualize the change of the prey’s initial position heatmap produced by SACL in the MPE hard scenario, and it indeed starts from the center and moves to the edges and corners. Please see Fig. 1 in the global response PDF. **Q2: The estimated value function at different checkpoints represents different policies' values. Do the authors consider the missing $\pi$ inside the value function?** Yes, the value function $\tilde{V}_i^{(t)}$ represents the value of the current policy $\pi_i^{(t)}$; a clearer way to write it would be $\tilde{V}_i^{\pi_i^{(t)}}$. We didn't explicitly write the policy $\pi$ in the value function for ease of notation. 
We will clarify this in our revised paper, and we apologize for the confusion caused. **Q3: Does the asymptotic convergence of the estimated value function at a few sampled states mean the policies converge to NE?** There might be some misunderstanding about the usage of Eq. (10). We are not using it as a criterion for NE convergence, but to prioritize subgames where the current policy is most likely to improve. If the weight $\tilde{w}(s)$ in Eq. (10) is small, it does not necessarily mean that the policy has converged in the subgame $MG(s)$. It is also possible that the subgame is too hard for the current policy and it is not making any progress. But in both cases, the induced subgame $MG(s)$ is not suitable for the current policy to learn from, and we should sample it less, which is exactly what a small weight does. **Q4: Why should the farthest point be sampled first in the subgame sampler?** In the subgame sampler, we use a state buffer to approximate the whole state space and record the state weights. In principle, the states in the buffer should span the entire state space and be distributed uniformly, but the rollout data is usually concentrated and very similar. Therefore, we need to select states that are sufficiently far from each other to ensure good coverage of the state space. More formally, we want to select a subset $S'$ of size $K$ from the full set $S$ so that the sum of the distances from each selected state to its nearest other selected state is maximized, i.e., $\max_{S'\subset S, |S'|=K}\sum_{s\in S'} \min_{s'\in S',\, s'\neq s}\|s-s'\|$. Farthest point sampling (FPS) is a greedy algorithm that efficiently finds an approximate solution to this problem. The ablation study in Section 5.2 and Fig. 5(d) also validates that FPS achieves the best result. We further visualize the state distribution in the buffer generated by different update methods in Fig. 2 of the global response PDF. Fig. 2(a-c) show the heatmaps of the predators’ position. 
Fig. 2(d-f) run PCA on the full state space and show the projection of the states in the buffer onto the two-dimensional space. The results show that if we randomly select states or greedily select states with high weights, the states in the buffer can become very concentrated and can't approximate the whole state space. **Q5: Only having one rollout in Alg. 2 might cause a large variance in the value function.** In the actual implementation, we run hundreds or thousands of rollouts and train on a batch of collected data. We will make the following changes to Alg. 2 to make this clear: >... > > For each parallel environment: > >     Sample $s^0 \sim \text{sampler}(M)$ with probability $p$, else $s^0\sim\rho(\cdot)$; > >     Rollout in $MG(s^0)$ and collect samples; > > Train $\{\pi_i, V_i\}_{i=1}^2$ on the samples via MARL; > > ... --- We would like to express our appreciation for your constructive review. We have carefully addressed each of your concerns and believe that our responses demonstrate the value of our proposed approach. We kindly request you reconsider the rating for our submission and genuinely hope our efforts will warrant a higher evaluation.
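The greedy FPS selection described in the Q4 response can be sketched as follows. This is an illustrative implementation, not the authors' code: it assumes states are NumPy vectors compared with plain Euclidean distance, and function names are hypothetical.

```python
import numpy as np

def farthest_point_sampling(states, k, seed=0):
    """Greedily pick k states so that each new pick is the state
    farthest from the set already selected (approximate max-min cover)."""
    rng = np.random.default_rng(seed)
    selected = [int(rng.integers(len(states)))]   # arbitrary first pick
    # distance from every state to its nearest already-selected state
    dists = np.linalg.norm(states - states[selected[0]], axis=1)
    for _ in range(k - 1):
        idx = int(np.argmax(dists))               # farthest point so far
        selected.append(idx)
        dists = np.minimum(dists, np.linalg.norm(states - states[idx], axis=1))
    return states[selected]

# toy buffer: 200 random 2-D states, keep 16 representatives
buffer = np.random.rand(200, 2)
reps = farthest_point_sampling(buffer, 16)
```

Each iteration costs one distance pass over the buffer, so selecting $K$ of $N$ states is $O(NK)$, which matches the rebuttal's claim that FPS is an efficient greedy approximation to the max-min objective.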
Summary: This paper presents an algorithm (SACL) for accelerating MARL training in zero-sum Markov games based on the subgame curriculum learning framework. A sampling metric based on approximated squared distance to NE and a particle-based sampler are proposed to sample states for subgame generation. Experiment results show the superiority of SACL in producing stronger policies and boosting training efficiency. Strengths: 1. The paper is well-written and easy to follow. The general idea of the algorithm is well illustrated via a toy example: the iterated RPS game. 2. The experiment results show both the efficiency and effectivity of SACL. Weaknesses: * I am not sure whether the game considered in this paper is perfect-information or imperfect-information. From the Rock-Paper-Scissors example and the baseline methods considered (NeuRD and PSRO), I am assuming the case of imperfect-information. If so, the optimal policy for a subgame depends on the distribution of hidden information, yet I don't see any component of SACL that deals with this distribution. For instance, the DeepStack (a poker AI) algorithm uses the agent's range and opponent counterfactual values when resolving a subgame in poker. * Also, the choice of the metric and oracle sampler appears ad hoc to me. 1. Why can we use Eq. (10) to approximate Eq. (9)? Is this estimator unbiased, or does it have low variance? 2. In Eq. (10), is the algorithm stable for different choices of $\alpha$? It would be good to see some discussion or an ablation study on it. Technical Quality: 2 fair Clarity: 4 excellent Questions for Authors: How is the distance between two points defined when using FPS to select points? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 2 fair Presentation: 4 excellent Contribution: 2 fair Limitations: Please see weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your constructive comments and questions! We are encouraged to see your positive assessment of SACL’s efficiency and effectiveness. And we hope the following responses can address your concerns. **Q1: Is the game considered in this paper perfect-information or imperfect-information?** In this paper, we use the formulation of Markov games, which is different from the formulation of extensive games with perfect-information or imperfect-information. SACL can be directly used in fully-observable Markov games where the states contain all information about the game and are observable to all agents. For partially-observable Markov games, though some of the information is hidden from the agents, the states still contain all information of the game and it is also possible to run SACL in these games. As the reviewer said, an important part is to deal with the distribution of hidden information. A way to do that is to replace states in prioritized sampling with infosets, i.e., sets of states that are indistinguishable to agents, and maintain the distribution of states within each infoset. In each episode, we first use prioritized sampling to select an infoset and then sample a state from the infoset to generate the subgame. In this way, we keep the distribution of hidden information and also build a subgame curriculum to accelerate training. **Q2: Why can we use Eq. (10) to approximate Eq. (9)?** We put the discussion about Eq. (9) and (10) in Appendix A.3 due to the limited space in the main text. In short, Eq. (10) does not directly approximate Eq. (9) but estimates the squared distance to the next local optimal value. As training continues, the final optimal value is the NE value and Eq. (10) approximates Eq. (9). Consider a simple case where the value function changes monotonically throughout training, as shown in Fig. 6 (in Appendix A.3), the first term in Eq. (10) can be regarded as a first-order approximation of the bias term in Eq. 
(9). However, in zero-sum games, the value function may oscillate up and down in different stages (like in hide-and-seek). In this case, Eq. (10) becomes the approximated squared difference between the current value and the next local optimal value, which is derived in Eq. (12) and shown in Fig. 7. Therefore, by using the weight in Eq. (10), we are not directly prioritizing states whose values are far from the NE value, but states whose values are far from the local optimal value of the current stage. And by accelerating the learning in each stage, we make the NE learning process more efficient in total. **Q3: Is the algorithm stable for different choices of $\alpha$ in Eq. (10)?** Yes, SACL is stable for different $\alpha$. The ablation study on $\alpha$ can be found in Appendix C.4 and Fig. 20(d). We test different values from $\{0.3, 0.7, 1.0, 3.0\}$ on the MPE hard scenario and they achieve comparable results. This shows that our algorithm is not sensitive to the hyperparameter $\alpha$. **Q4: How is the distance between two points defined when using FPS to select points?** Because different dimensions of the state vector (or point) may have different value ranges and affect the distance calculation, we first normalize each dimension of the state vector so that their ranges are all $[0, 1]$. Then the distance between two states is simply the Euclidean distance. The distance between a state $s_a$ and a set of states $S$ is the smallest distance between $s_a$ and any state in $S$, i.e., $\min_{s\in S} \|s_a - s\|$. --- We greatly appreciate your thorough review of our work and hope our answers effectively address your concerns. We would be very grateful if you could re-evaluate our paper based on the responses and consider raising the rating for our submission. --- Rebuttal Comment 1.1: Comment: Thanks very much for your rebuttal. 
After reading your response to **Q1**, I am still deeply concerned with the way SACL constructs a subgame for imperfect-information games. - Subgame solving for imperfect-information games is fundamentally different from that for perfect-information games [1-4]. - For imperfect-information games (e.g., poker), there is an extensive literature on subgame solving (how to construct and solve a subgame). The paper barely touches these methods, only citing [1] without any discussion. Given the existing literature on how subgame solving is currently done [1-4], the subgame generation, sampling, and solving in SACL **look rather ad hoc to me**. Without any theoretical result or experimental comparison to any of [1-4] on 'default' benchmarks such as poker, I am not convinced that the subgame curriculum learning method presented in this paper is a solid contribution to the general subgame solving research community. [1] Brown, Noam, and Tuomas Sandholm. "Safe and nested subgame solving for imperfect-information games." Advances in Neural Information Processing Systems 30 (2017). [2] Zhang, Brian, and Tuomas Sandholm. "Subgame solving without common knowledge." Advances in Neural Information Processing Systems 34 (2021): 23993-24004. [3] Burch, Neil, Michael Johanson, and Michael Bowling. "Solving imperfect information games using decomposition." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 28. No. 1. 2014. [4] Moravcik, Matej, et al. "Refining subgames in large imperfect information games." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 30. No. 1. 2016. ... --- Reply to Comment 1.1.1: Title: Reply to Reviewer dzKo Comment: Thank you for your comments. We would like to clarify some aspects of our work that may not have been fully grasped. In this work, we use SACL to accelerate training in fully-observable Markov games. 
Please note that Markov games (where the agents make decisions simultaneously in each step) and extensive-form games are different formulations, and a fully-observable Markov game can be an imperfect-information extensive-form game, e.g., the iterated RPS game in Section 3. The subgame of a fully-observable Markov game can be solved by standard MARL methods like Minimax-Q. In the previous rebuttal, we discussed a potential way to extend SACL to partially-observable Markov games, and we agree with the reviewer that it could be further improved by methods like subgame solving. However, it is out of the scope of this paper and we leave this extension of SACL for future work. In addition, we would like to respectfully point out that subgame (re)solving techniques in [1-4] are fundamentally different from ours. The idea of subgame solving is to first get a blueprint strategy of the abstracted game and use it to play the original game. As the game progresses and the remaining game becomes tractable, the specific subgame is solved in real-time to create a combined final policy. Subgame solving is **applied online during the game** when extra time and computing resources are available and considers **“how to solve the subgame”**. By contrast, SACL is used to learn a policy **before playing the game** and accelerate the training by generating an appropriate order of subgames to learn, i.e., we focus on **“which subgame to solve”**. These two methods exhibit distinct differences, and we have full confidence that SACL makes a solid and valuable contribution to the community. Your suggestion to add more discussions on subgame solving is well-received and we will make sure the clarification and additional references are included in our revised paper. We are committed to making the necessary changes to clarify all aspects. We hope these explanations and clarifications can help you have a more thorough understanding and a better evaluation of our paper.
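The distance computation described in the earlier Q4 response (per-dimension min-max normalization to $[0,1]$, Euclidean distance between states, and point-to-set distance as the minimum) can be written as a short sketch. Function names here are illustrative, not from the authors' implementation.

```python
import numpy as np

def normalize(states):
    """Rescale each state dimension to [0, 1] so that no single
    dimension dominates the Euclidean distance."""
    lo, hi = states.min(axis=0), states.max(axis=0)
    return (states - lo) / np.maximum(hi - lo, 1e-8)

def dist_to_set(s_a, S):
    """Distance from state s_a to a set S: the smallest Euclidean
    distance between s_a and any state in S."""
    return np.linalg.norm(S - s_a, axis=1).min()

# two dimensions with very different raw ranges
states = normalize(np.array([[0.0, 10.0], [1.0, 20.0], [2.0, 30.0]]))
d = dist_to_set(states[0], states[1:])   # distance to nearest of the rest
```

After normalization the three states become [0, 0], [0.5, 0.5], [1, 1], so the nearest-neighbor distance from the first state is $\sqrt{0.5}$ regardless of the original per-dimension scales.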
Rebuttal 1: Rebuttal: We would like to thank all reviewers for taking the time to review our submission and providing constructive comments on our work. We are heartened by the consensus among reviewers about the strengths of our work, which align with our intentions and efforts: 1. **Novelty and significance:** We appreciate that all reviewers agree our proposed approach is novel and substantially accelerates NE learning in zero-sum games. It's encouraging to see the comment by reviewer 9JJp that "the paper addresses an important and challenging problem of reducing the computational cost of solving complex zero-sum games with MARL, which has many potential applications and implications." 2. **Thorough evaluation:** We are pleased to see the reviewers' acknowledgment of our comprehensive experiments and evaluations in three different games. As mentioned by reviewer dzKo, "The experiment results show both the efficiency and effectivity of SACL." 3. **Clarity and presentation:** We are gratified to see that our efforts to present our ideas clearly have been well-received by all reviewers. Many reviewers mentioned the illustrating example and reviewer z7mX said "The paper provides a detailed analysis of the proposed approach and justifies it through experiments and analysis of a simple iterated Rock-Paper-Scissors game." In response to the specific concerns and suggestions raised by each reviewer, we provide a detailed discussion for each of your reviews. We have also prepared a PDF that contains the additional experiment results. We believe these figures will provide a more comprehensive visualization of our results and further reinforce the validity of our work. Best regards, Submission 6466 Authors Pdf: /pdf/84bdc777edb508b42356b655f05ba8767400a92e.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper proposes a framework for learning Nash equilibria in zero-sum Markov games based on subgame curriculum learning. Novel sampling metrics for subgame generation are proposed. The proposed SACL algorithm achieves equal performance at lower sample complexity compared with self-play algorithms. Strengths: - Proposes a novel framework for learning NE in zero-sum games. This can be combined with different RL algorithms to produce learning mechanisms with lower sample complexity. - The presentation of the paper is clear. Weaknesses: - While the baseline methods require exponential complexity, the proposed SACL costs approximately half, which is a limited practical improvement, especially considering the real-world applications the authors suggest. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Are there potential ways to enlarge the complexity improvement? This could significantly improve the impact of the proposed method. I am not familiar with this field. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: A more thorough discussion of the limitations and future steps could be beneficial. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review and feedback! We appreciate your recognition of the novelty of our work and the clarity of our presentation. In response to your questions, we provide the following explanation. **Q1: While the baseline methods require an exponential complexity, the proposed SACL costs approximately half, which is a limited practical improvement especially considering real-world applications as the authors suggested.** We would like to clarify that SACL substantially accelerates the learning process rather than just halving the sample cost, as shown in both the motivating example and our experiments. More concretely, in the motivating example, the iterated RPS game, we show both theoretically and empirically that SACL greatly reduces the sample complexity from exponential to linear. In our experiments, SACL learns much stronger policies with the lowest exploitability in both MPE and GRF. The learning curves show that SACL achieves this with much lower sample complexity. Take the MPE hard results in Fig. 4(b) as an example: SACL converges to an approximate NE with about $1.5 \times 10^3$ exploitability using 40M samples. We trained SP for 400M samples and it still couldn't reach the same exploitability, which shows that SACL is at least $10$ times faster than the best baseline. In the complex HnS environment, as the reviewer mentioned, SACL uses about half the samples. This is NOT a trivial improvement in practice considering the long training time. Using a single A100 GPU, it takes about $10$ days to train the policy with MAPPO. With half the samples, SACL only uses about $5$-$6$ days. For more complex real-world applications like OpenAI Five, which was trained for $10$ months, halving the samples could save a substantial amount of time. To support our argument, we would like to quote a few lines from the other reviewers and hope they can help the reviewer gain a more comprehensive understanding of our work. 
* Reviewer 9JJp: “The paper addresses an important and challenging problem of reducing the computational cost of solving complex zero-sum games with MARL, which has many potential applications and implications.” * Reviewer z7mX: “SACL is shown to produce much stronger policies than baselines in experiments conducted in the particle-world environment and Google Research Football environment.” * Reviewer dzKo: “Experiment results show the superiority of SACL in producing stronger policies and boosting training efficiency.” **Q2: Are there potential ways to enlarge the complexity improvement? This could significantly improve the impact of the proposed method.** As discussed in the previous question, our analysis and experiments show SACL already greatly reduces the complexity and accelerates NE learning in zero-sum games. We believe SACL can be helpful to make MARL training in complex zero-sum games more affordable to the community. Furthermore, since SACL is a general framework that does not require domain knowledge of the game, it is possible to further accelerate training by incorporating domain-specific design. For example, the subgame complexity in MPE is determined largely by the position of agents and is less affected by the location of obstacles. So when we use FPS to select representative states as described in Section 4.3, we can put more weight on the dimension for agents' position and less on the landmarks'. This domain-specific technique can further accelerate NE learning in MPE. --- We genuinely value your feedback and hope our answers can help you better evaluate the contribution of our work. We kindly request your reconsideration of the paper’s rating based on our responses and hope to receive your support for its acceptance. --- Rebuttal Comment 1.1: Title: Thank you Comment: I thank the authors for their detailed response, which has helped me better appreciate the paper's contributions. I am raising the score to 5.
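The domain-specific weighting suggested above (emphasizing agent-position dimensions over landmark dimensions when computing FPS distances) amounts to a weighted Euclidean distance. The dimension indices and weights below are purely illustrative, not from the paper.

```python
import numpy as np

def weighted_dist(s_a, s_b, w):
    """Euclidean distance with non-negative per-dimension weights w."""
    return np.sqrt(np.sum(w * (s_a - s_b) ** 2))

# illustrative: dims 0-1 = agent positions (high weight),
#               dims 2-3 = landmark positions (low weight)
w = np.array([1.0, 1.0, 0.1, 0.1])
a = np.array([0.0, 0.0, 0.0, 0.0])
b = np.array([1.0, 0.0, 0.0, 0.0])  # agent moved by 1
c = np.array([0.0, 0.0, 1.0, 0.0])  # landmark moved by 1
```

With these weights, an agent displacement of 1 contributes far more to the FPS distance than an equal landmark displacement, so the sampler's state coverage is driven mainly by agent configurations.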
null
null
null
null
null
null
A case for reframing automated medical image classification as segmentation
Accept (poster)
Summary: This paper explores the benefits and drawbacks of using segmentation-based methods for classification tasks, particularly in medical imaging. This approach, known as segmentation-for-classification, has been shown to outperform traditional classification models, especially when the available dataset is small or when the classes are imbalanced. It can also handle rare subtypes better than conventional models. Strengths: - Improved aggregate performance: by enhancing the divergence between segmentation's positive and negative classes and increasing the quantity of annotation, segmentation-for-classification yields larger performance gains in limited-data situations, such as small datasets, low class prevalence, and rare subtypes. - Reduced susceptibility to spurious correlations: Segmentation-for-classification is generally more robust to background features that are spuriously correlated with the target task, thus making it more reliable in classification tasks. - Location information: Segmentation-for-classification inherently delivers location information, facilitating human assessment of the model's predictions. This location information is particularly useful in medical imaging, where abnormalities can be challenging to find. - Comprehensive experiments: It is impressive to see the breadth of datasets used, covering synthetic data as well as three distinct medical datasets. The thorough exploration of different training regimes, including fully labeled, semi-supervised, and a boosted semi-supervised approach, greatly contributes to the depth and robustness of this investigation. Weaknesses: - Lack of Baseline Comparisons: While the study establishes that segmentation-for-classification performs better than traditional classification in certain scenarios, it does not benchmark these findings against other state-of-the-art methods for handling imbalanced datasets or for performing classification with limited data. 
Without these comparisons, it's hard to evaluate the true value of the proposed approach. - Handling of Spurious Correlations: While the authors claim that segmentation-for-classification is less susceptible to spurious correlations, the empirical evidence provided seems limited. The specific mechanisms through which the proposed method avoids or mitigates these correlations could be elaborated on more clearly. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Please address the questions mentioned in weakness. Also, I am wondering how the segmentation-for-classification performs in 3D segmentation scenario. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: Add more baseline comparison. Specific mechanism for mitigating spurious correlations. 3d data performance. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your suggestions and questions in your review. We were glad to hear the reviewer appreciated the many potential benefits of segmentation-for-classification. Below, we answer your questions point-by-point. We would welcome any additional discussion or suggestions from the reviewer. *Q1: Lack of Baseline Comparisons: While the study establishes that segmentation-for-classification performs better than traditional classification in certain scenarios, it does not benchmark these findings against other state-of-the-art methods for handling imbalanced datasets or for performing classification with limited data. Without these comparisons, it's hard to evaluate the true value of the proposed approach.* We do use straightforward classification and segmentation networks in the original submission, as we were interested in exploring fundamental differences between the two task framings. We observed that segmentation-for-classification is able to achieve higher performance in the limited data regime (including with imbalanced datasets), improved robustness to spurious correlations, and other benefits without any sophisticated methods: simply by using a segmentation model, one is able to attain these benefits. We agree with the reviewer that additional classification baselines for these settings exist, but we note that they differ by objective (e.g., methods for reducing reliance on spurious correlations differ from methods that handle class imbalance, which in turn differ from methods that improve interpretability). We believe the fact that segmentation handles all of these applications without specialized tooling is a major strength of the proposed approach. To address the reviewer’s concern, we are happy to include additional references and discussion of prior work in classification aimed at addressing these applications, for example [1, 2, 3]. 
Finally, we note that the SPINE dataset is balanced and we still observe improvements using segmentation-for-classification. *Q2: Handling of Spurious Correlations: While the authors claim that segmentation-for-classification is less susceptible to spurious correlations, the empirical evidence provided seems limited. The specific mechanisms through which the proposed method avoids or mitigates these correlations could be elaborated on more clearly.* We will add the mathematical statement and support (with proper reference to prior work) of why we expect segmentation-for-classification to be more robust to spurious correlations to the Appendix in Section A3, along with additional intuition and discussion elaborating on this point. Intuitively, spurious features carry less information about segmentation labels than image-level labels. Empirically, in the original submission we observed improved robustness with the CANDID dataset, where we saw performance on a difficult subset of patients improve from 0.58 to 0.84 AUROC. We also swept the strength of the spurious correlation in Figure 4(f) with the synthetic dataset, which showed classification’s performance dropoff with the increasing spurious correlation strength. *Q3: Also, I am wondering how the segmentation-for-classification performs in 3D segmentation scenario.* The SPINE dataset is a 3D dataset, for which we use 3D convolutional networks. We see similar trends as we saw in 2D. We note there isn’t anything in our analysis or method that is specific for 2D segmentation, supporting the observed results that hold in both 2D and 3D. Additionally, during this rebuttal period we have generated new results for a natural image dataset and for multiclass datasets. 
Both of these new applications worked out-of-the-box with the method described in the paper, suggesting the method and findings extend to additional datasets and settings the reviewer may be interested in; we provide details in the global response to reviewers. [1] Sohoni, N., Dunnmon, J., Angus, G., Gu, A., & Ré, C. (2020). No subclass left behind: Fine-grained robustness in coarse-grained classification problems. Advances in Neural Information Processing Systems, 33, 19339-19352. [2] Liu, E. Z., Haghgoo, B., Chen, A. S., Raghunathan, A., Koh, P. W., Sagawa, S., Liang, P., & Finn, C. (2021). Just train twice: Improving group robustness without training group information. In Proceedings of the International Conference on Machine Learning (pp. 6781-6792). [3] Azizi, S., Mustafa, B., Ryan, F., Beaver, Z., Freyberg, J., Deaton, J., ... & Norouzi, M. (2021). Big self-supervised models advance medical image classification. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 3478-3488). --- Rebuttal Comment 1.1: Comment: Thanks for your rebuttal and my concerns have been addressed. I would like to keep my rating --- Reply to Comment 1.1.1: Title: Final comment to reviewer GK2b Comment: Thank you for your time reviewing and for your suggestions, we are glad we were able to answer your questions.
Summary: This paper provides an intriguing and somewhat disruptive approach to medical image classification tasks. It implies that due to advancements in weakly-supervised, self-supervised, and semi-supervised segmentation techniques, the historical inclination towards image classification due to ease of training and label acquisition might be reconsidered. The authors argue for a shift towards segmentation, traditionally a more complex task, and demonstrate this by implementing "segmentation-for-classification" models. Using three retrospective datasets, they show these models outperforming traditional classification techniques in sample efficiency, performance on low-prevalence classes and rare subgroups, robustness to spurious correlations, and interpretability. However, the authors limit their consideration to tasks where the class of interest can be localized. Strengths: * The paper presents an innovative idea that can potentially change the current paradigm. If the results hold up, this reviewer sees a strong argument that all existing segmentation models can easily be turned into powerful classification models by adding a small g() function at the end. * The simplicity of the language used in the paper aids understanding. * Figure 2 does an excellent job of visualizing the difference between classification and segmentation problems in the medical domain. Weaknesses: 1. The most important issue this reviewer has discovered is how readily the authors dismissed the cost of providing pixel-level annotation. The difference between pixel-level and image-level (i.e., class) annotation can be orders of magnitude. For example, in a study this reviewer was part of, an expert spent 279 s on average to produce pixel-level annotation for a single image, while it took only 2 s to generate a class label. Therefore, a need to produce pixel-level annotations for even a small minority of images may be devastating. 
So far, from the literature and experience, it seems that such data is still needed, albeit in smaller quantities. 2. It is also important to point out a worry related to the study's design. In this work, the performance of augmented segmentation models is compared to the classification models, but no apparent effort is taken to ensure that the results obtained are not due to segmentation models being more powerful (which is often the case). Even the simplest segmentation architecture with the most naïve thresholding may have more parameters than the corresponding classification model. This is important to comment on in the main paper. 3. This brings another important observation - a lot of critical content, such as related works and algorithm details for summarising the probability maps, how multi-layer networks were trained and what architectures were used, the results for 4.2.2, etc., is relegated to the appendix, while arguably less crucial content occupies the main body of the text. This hampers the logical flow and comprehensiveness of the paper. For example, the discussion on the inherent differences between segmentation and classification, though interesting, feels redundant as it seems obvious that different methods would yield different results. This space could be better utilized to expand on related work and other methodological bits mentioned above. Section 5 can be greatly compressed, as it seems repetitive and less important. 4. Most figure captions are insufficiently informative, requiring the reader to refer back to the main text. Figure captions should be independent of the rest of the text. 5. The presentation of the results can be expanded, with key information (e.g., number of true positives and false negatives) at this point missing. 6. The last two sections (4 and 5) feel more rushed and less systematic, reducing the overall readability of the paper. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1.
How do the authors justify the much higher cost and time commitment associated with pixel-level annotation as opposed to class-level annotation? Given the substantial difference, could the authors provide insights on the practicality of their approach in real-world scenarios? 2. Is it possible that the improved performance of the segmentation models over the classification models is simply due to them being inherently more powerful, possibly because they have more parameters? Could the authors elaborate on this aspect and discuss how they ensured a fair comparison between the two models? 3. Could the authors elaborate on the decision to relegate critical content to the appendix? Would it not improve the logical flow and comprehensiveness of the paper if key aspects such as related works, algorithm details, and results were included in the main text? Conversely, could parts of the discussion on the differences between segmentation and classification be moved to the appendix to make space for the aforementioned content in the main text? 4. Have the authors considered re-running their results multiple times to ensure they are reproducible? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The authors acknowledge their consideration only for classification/segmentation tasks where the class of interest can be localized. They do not address situations where the entire image must be evaluated, which is a significant limitation. Additionally, while the authors advocate for segmentation, they fail to adequately address the issue of pixel-level annotation's increased complexity relative to image-level annotation. The paper does not discuss any potential negative societal impacts. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your careful read of our paper and many suggestions, which strengthened our submission. Below we respond to your points and describe how we updated our manuscript in response. We would be happy for further discussion or to answer any additional questions. *Q1: How do the authors justify the much higher cost…associated with pixel-level annotation?* We agree with you, per-image annotation cost is a major difference between classification and segmentation. Our goal with this paper is to better flesh out the decision space between segmentation vs. classification—it is not only the annotation cost that is different, but fundamental network behavior is different. As we show, segmentation-for-classification can lead to more performant, robust, and interpretable models. In clinical settings, one may prefer more performant or reliable models despite the added annotation cost. We do explore ways to ameliorate the annotation cost. We show in Section 4.2.2 that by using semi-supervision methods, we can use unlabeled data and <10% of the training data labeled with segmentation masks to achieve the performance of a model trained with 100% of the data labeled with classification labels. Further, we believe the landscape of image labeling is changing. In the last six months, the Segment Anything Model [1], SEER [2], and medical-imaging-specific extensions [3, 4] have been published. This changing landscape is part of the motivation for this submission: we want to consider the tradeoffs of segmentation vs. classification as tools that make labeling less burdensome become increasingly available. That being said, we do not mean to dismiss the high annotation cost required to produce segmentation labels. To address the reviewer’s concerns, we will adjust paragraphs 2 and 4 in the Introduction to more carefully position the paper with respect to the labeling burden of segmentation. 
Further, we will expand the discussion of labeling cost in Section 5 with more space given to comparative labeling burdens per image. [1] Kirillov, A., et al. (2023). Segment anything. arXiv:2304.02643. [2] Zou, X., et al. (2023). Segment everything everywhere all at once. arXiv:2304.06718. [3] Butoi, V. I., et al. (2023). Universeg: Universal medical image segmentation. arXiv:2304.06131. [4] Wang, C., et al. (2023). SAM-MED: A medical image annotation framework based on large vision model. arXiv:2307.05617. *Q2: Is it possible that the improved performance of the segmentation models over the classification models is simply due to them being inherently more powerful, possibly because they have more parameters?* Thank you for bringing this up, it is important to clarify. Please see the global response to all reviewers where we include two new tables showing that the improved performance is not simply due to increased model parameters. *Q3: Could the authors elaborate on the decision to relegate critical content to the appendix?* Thank you for the feedback. We had trouble fitting the paper into the 9 page limit, and as a result moved a lot of content to the Appendix. As we understand, there is an additional page allowed for the camera ready. With that in mind, we plan on making the following changes based on your (and other reviewers’) feedback and we are open to additional suggestions: - Bring details from Section A2 into the main text in Section 3, including: - Algorithms 1 and 2 (combined into one algorithm box). - Description of summarizing function architectures, including additional descriptions in Section 3 (moved from their current location in Section A2) and depicting the summarizing functions in a new Figure 3. - Expand the “Training” subsection in Section 4 to include model architectures, loss function, and optimizer (currently in Section A5.2). - Move results Table A6 to Main Text. 
- Expand the Summary of Related Work in the Introduction to be more thorough, while keeping the full Related Works in the Appendix. We agree the Related Works (currently located in the Appendix) is an important section, but it is lengthy (~1.3 pages) in order to be comprehensive, so we do not believe we can put the entire Related Works in the main text. - To make space for these changes, we can compress Sections 5 and 6 as well as remove Figure 2 and related references to the Figure. *Q4: Most figure captions are insufficiently informative, requiring the reader to refer back to the main text. Figure captions should be independent of the rest of the text.* Thank you for this suggestion, we will update all figure captions to be more comprehensive. *Q5: The presentation of the results can be expended, with key information (e.g., number of true positives and false negatives) at this point missing.* Thank you for the suggestion. In the original submission, we have additional performance metrics (balanced accuracy, recall, precision, specificity) in Table A6 for the CANDID dataset, which are indicative of performance beyond the AUROC. In the revised submission, we will include similar tables for the ISIC and SPINE datasets along with raw TP, TN, FP, FN numbers. *Q6: Have the authors considered re-running their results multiple times to ensure they are reproducible?* We are including new results for the CANDID segmentation and classification tasks with different random initializations in the limited data regime (Table R5). We observe a standard deviation of 0.0024 for segmentation-for-classification and 0.0054 for classification AUROC. In this rebuttal period we do not have time to rerun every experiment, but we can continue running experiments to have multiple seeds for the camera ready, if accepted. Additionally, due to reviewer D8HP’s suggestion, we trained multiclass and natural image models with the proposed method. 
Both of these applications worked out-of-the-box, suggesting the method and findings extend to additional datasets and settings (details in the global response). --- Rebuttal Comment 1.1: Comment: Dear Authors, Thank you for your comments; you have addressed most of my concerns. --- Reply to Comment 1.1.1: Title: Final comment to Reviewer RFD8 Comment: Great, we are glad we were able to address your concerns. Thank you again for your careful read and many suggestions!
Summary: The paper describes a set of insights obtained when using segmentation networks for a classification task. Classification was the task of choice due to issues with obtaining appropriate segmentation labels. However, with wider availability of datasets with appropriate labels, this is no longer the case. To facilitate consideration of using segmentation or classification networks, the paper presents a set of best practices and tradeoffs. Strengths: The manuscript is very thorough. An important problem and best practices to solve it are discussed. In particular, the insights benefit the study of smaller datasets, rare subgroups, and robustness, all of which are important concerns in medical image analysis. The supplementary material is very thorough and includes analysis of a synthetic dataset and three real datasets (X-ray, CT, skin). Finally, some theoretical analysis evaluating separability of positive and negative segmentation and classification classes is provided. I believe the analysis provided in the paper would be helpful to those trying to design an analysis task on medical image data. Weaknesses: After reading through the manuscript and supplementary material, I believe that some of the contents of the supplementary material (e.g., visualizations, choice of datasets) and the description of the pipeline can be summarized in a more detailed overview Figure (instead of Figure 3). While a lot of work (e.g., comparisons across networks) has been put into the project, not all of it is clearly explained in the main manuscript. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Have the authors evaluated the effect of network size (# of parameters) vs performance? It seems like a smaller classification network and a larger segmentation network should in theory learn the same information. 
Several weaknesses of classification networks and associated strengths of segmentation networks are discussed (e.g., in Figure 4, segmentation nearly always outperforms classification). When is the reverse true? Would there be cases where segmentation networks more easily overfit? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The paper presents several interesting insights into performance of classification via segmentation vs classification networks. However, I am not sure if the presented conclusions will generalize to other datasets/modalities. The paper could do a better job in clarifying the extent of its contributions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your time reviewing our manuscript and for your suggestions. We were glad to hear you thought the work was thorough, addresses an important problem, and would be helpful to future readers. We were also glad to receive your comments and suggestions for improving the manuscript. Below, we answer your questions and describe how we have improved our submission as a result of your comments. We would be happy to answer any additional questions or hear additional suggestions. *Q1: After reading through the manuscript and supplementary material, I believe that some of the contents of the supplementary material (e.g., visualizations, choice of datasets) and the description of the pipeline can be summarized in a more detailed overview Figure (instead of Figure 3). While a lot of work (e.g., comparisons across networks) has been put into the project, not all of it is clearly explained in the main manuscript.* Thank you for this suggestion. We had trouble with the page limit and as a result placed a lot of content in the Appendix. We understand there is an additional page allowed for the camera ready. With that in mind, we plan on making the following changes and are open to additional suggestions: - Bring details from Section A2 into the main text in Section 3, including: - Algorithms 1 and 2 (combined into one algorithm box). - Description of summarizing function architectures, including additional descriptions in Section 3 (moved from their current location in Section A2) and depicting the summarizing functions in a new Figure 3. If the reviewer had specific content in mind they think would be useful to include in a modified Figure 3, we would appreciate their input! - Expand the “Training” subsection in Section 4 to include model architectures, loss function, and optimizer (currently in Section A5.2). - Move Table A6 to Main Text. 
- Expand the Summary of Related Work in the Introduction to be more thorough, while keeping the full Related Works in the Appendix. - To make space for these changes, we can compress Sections 5 and 6 as well as remove Figure 2. *Q2: Have the authors evaluated the effect of network size (# of parameters) vs performance?* This is a good question and we appreciate the suggestion to clarify this point. Please see our new results to answer this question in the global response, which we will include in the Appendix of the revised manuscript. *Q3: Several weaknesses of classification networks and associated strengths of segmentation networks are discussed (e.g., in Figure 4, segmentation nearly always outperforms classification). When is the reverse true? Would there be cases where segmentation networks more easily overfit?* This is an interesting question and one we investigated, though we did not find (either empirically or theoretically) settings within our defined scope (set in Section 2.1) in which segmentation-for-classification achieved worse performance than traditional classification. It is true that with very small datasets segmentation-for-classification models overfit, but the traditional classification networks were also not able to learn a more performant decision boundary in these settings. Outside of the scope we set, for example for tasks which require classifying the global image (e.g., classifying imaging modality), classification may achieve higher performance out-of-the-box than the segmentation-for-classification networks we use in the paper. Future work may investigate how to leverage segmentation networks for these global tasks; we will add discussion on this direction in our future work section. The future work is currently in Appendix A7, but we can move the future work paragraph into the main text’s Conclusion. *Q4: The paper presents several interesting insights into performance of classification via segmentation vs classification networks. 
However, I am not sure if the presented conclusions will generalize to other datasets/modalities. The paper could do a better job in clarifying the extent of its contributions.* We have included new results in the global response showing that the proposed method can be directly used in additional settings, including natural images and multiclass settings. We will add the multiclass and natural image results to the Appendix of the revised manuscript. To summarize, through our original experiments and this rebuttal, we have shown that the presented conclusions extend to different imaging modalities (RGB images, CT, x-ray), natural and medical images, 2D and 3D images/networks, and single class and multi class tasks. We have also explored trends thoroughly with synthetic datasets (Figure 4), showing settings in which performance differences emerge. Further, our theoretical analysis in Section 2 helps explain and support why one should expect the observed benefits to transfer to other datasets. Please let us know if there are additional generalizations or clarifications you are interested in discussing further. --- Rebuttal Comment 1.1: Title: thank you Comment: Thank you for the comments. The additional results helped clarify my concerns and I raised my rating. I believe papers that provide exploratory analysis of tasks such as the one presented are just as helpful as those with a specific methodological contribution. --- Reply to Comment 1.1.1: Title: Final comment to reviewer rmFZ Comment: Thanks so much for your review and commentary, we are glad we were able to address your questions.
Summary: In this work, the authors explored the applications of deep learning in radiology, specifically focusing on image classification and segmentation tasks. The authors investigated the performance differences between classification and segmentation models on the same dataset and task using an information theoretic approach. They proposed a method called "segmentation-for-classification" that utilizes segmentation models for medical image classification. The authors compared this approach with traditional classification on three retrospective datasets, consisting of 2,018 to 19,237 samples. Through their analysis and experiments, they highlighted the benefits of segmentation-for-classification, including improved sample efficiency, better performance with fewer labeled images (up to 10 times fewer), particularly for low-prevalence classes and rare subgroups (up to 161.1% improved recall). They also demonstrated improved robustness to spurious correlations (up to 44.8% improved robust AUROC), as well as enhanced model interpretability, evaluation, and error analysis. Strengths: Through theoretical analysis, the benefits of using segmentation for learning classification datasets are validated, and extensive analysis and validation are conducted on small datasets. Weaknesses: 1. The proposed method is not specifically designed for medical images; it is a general method but lacks validation in some common settings. 2. The method proposed in the paper is limited to binary classification requirements, which limits its scalability. 3. The method does not introduce a new module specifically designed to extend segmentation for classification tasks, lacking novelty. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: As shown in Figure 4, why is there such a significant difference between segmentation and classification on small datasets? Are the model parameters and training processes for segmentation and classification strictly controlled to be consistent?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: Just as mentioned in the weakness section, the method proposed in the paper is limited to binary classification. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your suggestions and questions about our paper. Below, we respond to each of your points to answer your questions, provide new results, and describe how we are updating our submission in response to your comments. We would be happy to answer any additional questions or hear more suggestions during the discussion. *Q1: The proposed method is not specifically designed for medical images; it is a general method but lacks validation in some common settings.* In the global response, we clarify why we focus on medical images as a strong motivating application space. That being said, we agree with the reviewer that the methods can extend to natural images and show segmentation-for-classification improves average AUROC by 18.5% on a natural image dataset in the limited data regime. We will add these natural image results to the Appendix. *Q2: The method proposed in the paper is limited to binary classification requirements, which limits its scalability.* The method is not limited to binary classification; it can directly be used for multiclass settings. Instead of having a binary mask output from the segmentation network, a user simply needs to specify multiple output channels. We provide new experiments on multiclass data in the global response, showing segmentation-for-classification improves average AUROC compared to traditional classification by 18.5% in a natural image multiclass setting and by 22.0% in a medical image multiclass setting. To make sure this misunderstanding does not arise for future readers, we will describe the multiclass case in Section 3 and include the multiclass results in Section A5. 
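The multiclass extension described in this answer can be sketched concretely. The snippet below is our own illustration rather than code from the paper: a segmentation network's per-class probability maps (one output channel per class) are collapsed into image-level class scores by a summarizing function g(); we use a simple spatial max per channel as a hypothetical choice of g(), and the authors' exact function may differ.

```python
import numpy as np

def segmentation_to_class_scores(prob_maps):
    """Collapse per-class probability maps of shape (C, H, W) into C
    image-level scores by taking each channel's spatial maximum.

    This max-pooling reduction is a hypothetical summarizing function g();
    any deterministic per-channel reduction would play the same role.
    """
    n_classes = prob_maps.shape[0]
    return prob_maps.reshape(n_classes, -1).max(axis=1)

# Toy 3-class example: channel 1 contains a confident "lesion" blob,
# channel 0 only weak background activation, channel 2 nothing.
maps = np.zeros((3, 8, 8))
maps[1, 2:4, 2:4] = 0.9   # class-1 pixels strongly activated
maps[0, :, :] = 0.1       # weak activation everywhere for class 0
scores = segmentation_to_class_scores(maps)
predicted = int(scores.argmax())
print(scores, predicted)  # class 1 wins
```

The binary case in the paper is then just the special case with a single output channel and a threshold on the resulting score.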
*Q3: The method does not introduce a new module specifically designed to extend segmentation for classification tasks, lacking novelty.* Our contributions are in explaining why and when we see performance boosts using segmentation-for-classification (supported by our theoretical analysis), providing best practices for using segmentation-for-classification, and showing the number of tradeoffs between a traditional classification vs. segmentation-for-classification task framing. To develop best practices, we do test eight modules designed to extend segmentation for classification, but find that the simplest deterministic method works best (Figure 5). We believe this simple solution will help boost the usability of segmentation-for-classification, as practitioners can implement it with no extra computational cost or tuning costs as is required by more complex modules. Additionally, we do propose and evaluate a boosted semi-supervision method (Section 4.2.2), which has not been shown before and demonstrates how to adapt semi-supervised segmentation to the segmentation-for-classification setting. *Q4: As shown in Figure 4, why is there such a significant difference between segmentation and classification on small datasets?* Our analysis in Section 2.2 suggests this is because segmentation has a higher KL divergence between classes. Since the data is more separable using a segmentation task framing, the decision boundary between the two classes can be learned with fewer data points—leading to improved performance in the limited labeled data regime. *Q5: Are the model parameters and training processes for segmentation and classification strictly controlled to be consistent?* Thank you for bringing up this point, we agree it is important to clarify. Please see the global response for details on this question and new results on model capacity and performance. We will add these details to the revised manuscript clarifying this point for future readers in Section A5. 
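The separability argument in the answer to Q4 can be made concrete with a toy one-dimensional example (our construction, not an experiment from the paper): when the class-conditional distributions have a larger KL divergence, a decision boundary estimated from very few labeled samples already achieves low error.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_kl(mu0, var0, mu1, var1):
    """Closed-form KL( N(mu0, var0) || N(mu1, var1) )."""
    return 0.5 * (np.log(var1 / var0) + (var0 + (mu0 - mu1) ** 2) / var1 - 1)

def error_with_n_labels(mu_pos, n_train=10, n_test=100_000):
    """Fit a midpoint threshold from n_train samples per class and return
    balanced test error. Negative class ~ N(0,1), positive ~ N(mu_pos,1)."""
    neg = rng.normal(0.0, 1.0, n_train)
    pos = rng.normal(mu_pos, 1.0, n_train)
    thr = (neg.mean() + pos.mean()) / 2
    test_neg = rng.normal(0.0, 1.0, n_test)
    test_pos = rng.normal(mu_pos, 1.0, n_test)
    return ((test_neg > thr).mean() + (test_pos <= thr).mean()) / 2

# Barely separated classes (low KL) vs. well separated classes (high KL).
kl_low = gaussian_kl(0, 1, 0.5, 1)   # 0.125
kl_high = gaussian_kl(0, 1, 3.0, 1)  # 4.5
err_low = error_with_n_labels(0.5)
err_high = error_with_n_labels(3.0)
print(kl_low, kl_high, err_low, err_high)
```

With only 10 labeled samples per class, the high-KL task yields a much lower error, mirroring the claim that more separable task framings need fewer data points to learn a good decision boundary.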
--- Rebuttal 2: Title: Please acknowledge reading the rebuttal. Comment: Dear reviewer, Please acknowledge reading the rebuttal. One can acknowledge reading the rebuttal by posting an official comment on the open review platform. Best, AC
Rebuttal 1: Rebuttal: We thank the reviewers for their time and helpful suggestions, which helped us strengthen our submission. We were glad to hear the reviewers recognized the benefits and potential impact of segmentation-for-classification (reviewers rmFZ, RFD8, GK2b), found the paper thorough (reviewers D8HP, rmFZ, GK2b), and considered the approach innovative (reviewer RFD8). In our global response, we respond to common questions and report additional experiments. In the individual responses, we address each reviewer’s questions point-by-point. **Method generality.** Reviewers D8HP, GK2B, and rmFZ asked about the method's generality, both within medical imaging and in other areas. First, we highlight our intentional focus on medical imaging. Then, we present additional experiments requested by the reviewers showing the method’s applicability to natural, multiclass, and 3D images. We focus on medical images as segmentation-for-classification helps address problems that are particularly acute in this setting: - Many medical image datasets collected from within an institution are small or contain rare subgroups [1, 2, 3]; we show segmentation-for-classification improves performance in this limited data regime (e.g., up to 16.2% improved aggregate performance; up to 161.1% improved recall on a rare subset). - Sensitivity to spurious correlations is particularly worrisome in low-failure-tolerance applications like medical imaging [1]. We’ve observed segmentation improves robust AUROC up to 44.8%. - Segmentation inherently delivers location information about abnormalities, helping clinicians interpret the results of the classifier in clinical workflows. Thus, we believe medical image analysis is a strong motivating setting and well-suited application space for segmentation-for-classification. 
The improved performance and reliability in this risk-intolerant setting as well as the close alignment with clinical workflows helps justify the added cost of segmentation, although we note recent work in self-supervision and foundation models helps ameliorate the cost of labeling segmentation data [4, 5, 6]. That being said, we agree with reviewer D8HP that the methods extend to natural images and may be of general interest. To show the proposed methods extend to a natural image dataset, we include new results in Table R1 classifying dog and cat breeds in the Oxford Pets dataset in the limited data regime (50 training images per class). We see segmentation-for-classification improves average AUROC by 18.5%. We will include these results in the revised Appendix. To show the method applies to multiclass settings (reviewer D8HP), the Oxford Pets dataset contains 37 classes of dog and cat breeds; as above, we see segmentation outperforms classification by 18.5%. Further, we extend one of our medical imaging datasets to the multiclass setting and perform 3-class classification on ISIC. Again, we see improved performance with segmentation-for-classification (Table R2). We will describe the multiclass case in Section 3 and include the multiclass results in Section A5. To show the method applies to 3D data, we emphasize the SPINE dataset in our original submission is a 3D dataset, and we observe the same trends in 3D as we do in 2D. **Model capacity.** Reviewers D8HP, rmFZ, and RFD8 ask about differences in model capacity between the classification and segmentation. We first clarify how we selected architectures in the original submission. Then, we present new results showing that model capacity does not explain the observed performance differences. First, we clarify that in the original paper we tested multiple architectures for each task and dataset (Table A4). 
We did this to give each task and dataset the best performance, as it’s not clear the best architecture for segmentation on a given dataset is the same that is best for classification. To address the reviewer’s questions, we present two new results. In Table R3 we use the same backbone for both segmentation and classification. We use Resnet50 as the 2D backbone as it is a common backbone known to be useful for classification and segmentation. We see that segmentation shows the same or greater performance enhancement than what appeared in the original paper. We further note that all of our synthetic experiments were run with a standardized backbone. However, even when using the same backbone, segmentation models still have more parameters due to the convolutional decoder (as noted by reviewer RFD8). We next provide results comparing performance using a higher capacity classification model—Resnet101, which has more parameters than the Resnet50 segmentation model—in Table R4. Again, we see drastically improved performance with segmentation. Finally, we clarify that the training procedure is matched for classification and segmentation, including matching the input data, augmentations, and codebase (which standardizes model checkpointing, loss function, hyperparameter tuning, etc.). Together, these results show it is not model capacity or training procedure that leads to segmentation-for-classification’s improved performance. We will add these new details to the revised manuscript for future readers. [1] Oakden-Rayner, L., et al. (2020). Hidden stratification causes clinically meaningful failures in machine learning for medical imaging. ACM CHIL. [2] Ng, D., et al. (2021). Federated learning: a collaborative effort to achieve better medical imaging models for individual sites that have small labelled datasets. Quantitative Imaging in Medicine and Surgery. [3] Varoquaux, G., et al. (2022). 
Machine learning for medical imaging: methodological failures and recommendations for the future. NPJ DM. [4] Kirillov, A., et al. (2023). Segment anything. arXiv:2304.02643. [5] Zou, X., et al. (2023). Segment everything everywhere all at once. arXiv:2304.06718. [6] Butoi, V. I., et al. (2023). Universeg: Universal medical image segmentation. arXiv:2304.06131. Pdf: /pdf/aa41f1cff8761120f57ff0d084dafa8502a3c5d1.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
One-Step Diffusion Distillation via Deep Equilibrium Models
Accept (poster)
Summary: This paper applies Deep Equilibrium Models to distillation of (conditioned) diffusion models. The key part of the architecture consists of a repeated application of weight-tied block of layers on the internal activations (theoretically until convergence to a fixed point, but in practice a few iterations). The model is trained to mimic the mapping from noise to images induced by the diffusion sampling chain. Strengths: The premise of the paper makes sense. One important reason for the success of diffusion models might be that the chain of repeated (though time-varying) neural steps is, in aggregate, able to represent extremely complicated mappings, with near-discontinuous regions packed in complex high-dimensional shapes, etc. – something that is amply present in the latent-to-image mapping induced by the diffusion flow. Capturing this complexity with typical single-pass networks of modest layer counts is challenging, as evidenced by many attempts in the literature. DEQ seems like an appealing and natural approach here, as it has the similar character of iteratively shaping an initially simple distribution into a very complex one. There is also the promise of parameter count savings: effectively they implement a more complex function than just a single layer block, but parametrized on the same amount of weights. The proposed network architectures seem well justified for this task. The evaluation makes a reasonably convincing case that this can be beneficial, and that the promised benefits of DEQs are realized up to a point. There is a decent amount of useful experimentation and attempt to understand scaling around model size and other parameters. Weaknesses: The paper is scarce on details about the DEQ training and inference. The impression I’m left with (based on line 213 and the fact that no details are provided), is the authors do not actually use almost any of the DEQ machinery, such as fixed point solvers and related backpropagation shortcuts. 
Instead, the model seems to be trained as though it were a regular neural network, where the same block of layers is stacked in sequence six times (but with tied weights between them). This may work well in practice but raises some questions about whether DEQs are as relevant to the story as claimed, and whether some of their potential advantages are left on the table. There does seem to be fixed-point-like behavior and capability though, as illustrated in Figure 3a. Nevertheless, this raises the question of some baselines. What if the weights were _not_ tied? Wouldn’t inference and training require the same number of FLOPs as they currently do, but with some modest extra memory consumption for storing the weight values (and importantly, no more or fewer activations to store)? By default, wouldn’t one expect this to strictly extend the capability of the model? It’s possible that the tying has some implicit regularization effects as discussed on line 224, but this claim is speculative. It might also make the training more effective (as in, converge with fewer iterations), but this is also not clear-cut. The evaluation does not currently explore or rule out the possibility that weight tying could even be harming an otherwise higher-performing model. Such a baseline would distinguish whether the benefit comes from the particular architecture and depth, or from the DEQ-like aspects. At present, I am somewhat uncertain about which it is – the current ViT baseline seems sufficiently different that it might underperform for unrelated reasons. This would of course increase the parameter count, but note that parameter count is often taken as a proxy for evaluation time and memory consumption, neither of which would be seriously affected here. That is, unless my understanding is wrong, and the paper actually does use DEQ training mechanisms.
In this case, it is missing a significant amount of important detail about how those are implemented and how the associated difficulties are dealt with (this should be clarified in either case). The water is also a bit muddy in terms of how many iterations any given distilled model requires. One could argue that this is in spirit a six-step model (flanked by pre- and post-processing to embed in a latent space), or conversely, that e.g. a four-step progressive distillation model is actually just a one-step model that internally calls some layers four times. The more relevant question comes down to FLOPs and wall-clock time rather than the semantics of what we take the iterations to mean. Accordingly, including something like 2- or 4-step PD (which beat the metrics here) in the comparisons would be reasonable, unless it can be convincingly argued that their runtime is significantly slower due to the sub-stepping (I am unsure whether this is the case). Technical Quality: 3 good Clarity: 3 good Questions for Authors: Beyond the baselines discussed above, the role of the fixed point remains somewhat unclear either way. Does the iteration converge to a fixed point in six iterations? Could it even diverge? If one were to evaluate it all the way to a fixed point, how much extra performance could be squeezed out? Are six steps enough in training to reap these benefits? Miscellaneous observation: are you sure you are citing the correct FID/IS for Progressive Distillation [61] in unconditional CIFAR (Table 1)? If I understand correctly, the number you cite is for the conditional model (Table 4 in the appendix of [50]), which benefits from the conditioning info despite the lack of a guidance boost. However, for the unconditional model, Table 2 in [61] cites an FID of 9.12. Wouldn’t this be the correct number (not sure)? Relatedly, I’m not sure why this model is referred to as PD-EDM, and cited in combination with Consistency Models?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: There is a brief discussion of limitations, but it does not go into much depth. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your extremely thoughtful feedback and suggestions. We have tried our best to answer your questions and concerns below. > Does the iteration converge to a fixed point in six iterations? We report the relative fixed-point error $\frac{\| f(z) - z \|}{\| f(z) \|}$ and the resulting FID score as we vary the number of iterations of the equilibrium transformer within GET-M at inference time.

| Iterations | Relative Error | FID |
|----------|:-------------:|------:|
| 1 | 1.028 | 42.96 |
| 2 | 0.334 | 26.70 |
| 3 | 0.215 | 20.27 |
| 4 | 0.148 | 17.21 |
| 5 | 0.107 | 15.77 |
| 6 | 0.082 | 15.18 |
| 7 | 0.066 | 15.04 |
| 8 | 0.055 | 15.18 |
| 9 | 0.048 | 15.46 |
| 10 | 0.043 | 15.85 |

Based on our empirical results, we have indeed converged to a fixed point, with a low relative fixed-point error of 0.082 within 6 iterations. As a comparison, the DEQ-transformer [1], trained with gradients computed via the implicit function theorem, requires 30 iterations to achieve a relative error of 0.10 on language modeling. We also observe improved FID scores as we increase the depth toward the depth used at training time. The FID score eventually flattens out and marginally increases as we further increase the depth. As the reviewer has noted, it is certainly possible to get improved performance from GET by marginally increasing the depth at inference time. For instance, GET-M’s FID score improves to 15.04 when we increase the iterations at inference time to 7. __Does weight-tying harm GET’s performance?__: The reviewer has valid concerns about whether weight tying harms GET’s performance. To address this concern, we trained and evaluated a non-weight-tied architecture. In this architecture, we unroll the equilibrium transformer in GET, effectively using six separate GET blocks instead of iterating over a single GET block within the equilibrium transformer.
This architecture is otherwise identical to GET’s architecture, and thus has the same number of FLOPs. We trained this model for 800k iterations, matching the training duration of the weight-tied model, and found that it was 16% slower to train, despite having the same FLOPs. Further, this model also achieves an inferior FID score and Inception Score compared to the weight-tied model. This outcome underscores that weight-tying within GET indeed contributes to improved performance.

| Model | Params | FID | IS |
|----------|:-------------:|:----:| :----:|
| GET-B | 62.2M | 7.42 | 9.16 |
| GET-B-non-WT | 310M | 7.55 | 9.08 |

__Comparison of runtime of GET against PD__: To ensure a fair comparison between GET and PD, we would need to benchmark GET against a PD baseline that uses a ViT/DiT architecture. However, we do not have access to a ViT/DiT model that is distilled via PD on CIFAR-10. As a proxy for PD, we report the sampling time of EDM (50k images, 1 NFE). A single forward pass through EDM requires 27s, whereas GET-Tiny completes sampling in 14.7s, GET-Mini takes 23.3s, and GET-Small needs 35.7s (all under 6 iterations). Considering that we use significantly fewer teacher model evaluations than PD (35M vs 179M (lower bound) or 1.433B (upper bound)), the performance can potentially be improved by using more training data and by increasing training iterations. As an example, GET-Tiny, with 8.9M parameters and 14.7s sampling time, can reach an FID of 11.47 and an IS of 8.64 (while PD-EDM has an IS of 8.69 at > 410M teacher evaluations), given 4$\times$ training iterations on *the same data* (35M teacher evaluations). [1] Bai, Shaojie, J. Zico Kolter, and Vladlen Koltun. "Deep equilibrium models." Advances in Neural Information Processing Systems 32 (2019). > Are you sure you are citing the correct FID/IS for Progressive Distillation [61] in unconditional CIFAR (Table 1)? Thank you for requesting this clarification.
In Table 1, we have cited PD [61] at FID 9.12 (second row in Diffusion Distillation). We have also cited guided distillation [50] in Table 2, which is the performance for class-conditional models. The results for PD-EDM in Table 1 have been reported by Consistency Models [70] using EDM and stronger training settings, and we include it as a relevant baseline. We are happy to answer any further questions or concerns that you may have. --- Rebuttal Comment 1.1: Title: Rebuttal response Comment: Thank you for the thoughtful responses and experiments -- these results address my main concerns about potential hidden issues. Incorporating these and other clarifications in the paper and/or appendix will improve the paper. I remain leaning somewhat positive on acceptance.
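The fixed-point diagnostics discussed in this thread, repeatedly applying one weight-tied block and tracking the relative error $\| f(z) - z \| / \| f(z) \|$, can be sketched with a toy contraction standing in for the equilibrium transformer. The map `block`, the width `d`, and the injection `x` below are hypothetical illustrations, not the actual GET architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16
# Hypothetical stand-in for a weight-tied equilibrium block: a small
# contractive map with a fixed input injection x (not the GET block).
W = 0.3 * rng.standard_normal((d, d)) / np.sqrt(d)
x = rng.standard_normal(d)

def block(z):
    # One application of the (toy) weight-tied block f(z).
    return np.tanh(W @ z + x)

def iterate(num_iters):
    """Apply the block num_iters times from z = 0, recording the
    relative fixed-point error ||f(z) - z|| / ||f(z)|| at each step."""
    z = np.zeros(d)
    errors = []
    for _ in range(num_iters):
        z_next = block(z)
        errors.append(np.linalg.norm(z_next - z) / np.linalg.norm(z_next))
        z = z_next
    return errors

errors = iterate(10)
```

For a contractive block the relative error shrinks geometrically with the number of applications, mirroring the monotone decrease in the rebuttal's table, while weight tying keeps the parameter count that of a single block.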
Summary: The paper proposes a simple approach to distill diffusion models into generative models capable of sampling with just a single model evaluation. The method involves training a Generative Equilibrium Transformer (GET) architecture directly on noise/image pairs generated from a pre-trained diffusion model, eliminating the need for trajectory information and temporal embedding. The experiments verify its effectiveness. Strengths: - The paper is well-written and easy to follow. - The idea of using DEQ for distillation is interesting. Weaknesses: To verify the effectiveness, results on ImageNet are necessary, since ImageNet is another mainstream benchmark. Some experimental designs are not convincing enough. For more details, please refer to the questions. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: - Could you explain more about the motivation and advantages of using DEQ? - Following the prior question, about the comparison in Table 4: have you tried the diffusion transformer (DiT), which may be more parameter-efficient for the denoising process? The same question for the scaling experiments. - In Figure 2 (Left), the performance of GET falls off quickly with fewer iterations in the equilibrium transformer. Compared to other distillation methods such as PD and CD with a single step, what are the advantages? Are there any results about the sampling speed using different iterations between PD (CD) and GET? - Instead of 6 iterations, have you tried two GET models for two stages of denoising, using 2 iterations for each GET model separately ($t$ from 0 to 0.5 and $t$ from 0.5 to 1.0)? This may be more efficient than using 6 iterations. - Does GET drop the condition during training and support classifier-free guidance? Can this technique improve the performance? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: The authors have adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the well-thought-out questions and valuable feedback. We have tried our best to answer your questions. __Motivation and Advantages of DEQ__: Our motivation to model the student network as a DEQ stems from the observation that the relatively complex process of distilling diffusion models has an element of a fixed-point process, as indicated by recent works like Consistency Models (Song et al. 2023) and TRACT (Berthelot et al. 2023). In this work, we sought a network architecture capable of adapting its compute requirements according to the complexity of the image being generated. DEQs possess both of these capabilities. The empirical results in our paper indeed indicate that modeling the student network (i.e., GET) as a DEQ is vital to achieve good performance in our setup of one-step distillation. __Benchmarking against DiT__: In Table 4, the ViT baseline was simplified from the diffusion transformer (DiT). Because our distillation strategy does not require the temporal embedding, we remove this layer from DiT, leading to the ViT baseline shown in Table 4. __Clarification about one-step generation__: We define one-step generation as the ability to generate an image directly from Gaussian noise in a _single_ forward pass through the network. There can be multiple iterations over the equilibrium transformer during this forward pass, but it is important to note that all of these iterations contribute to the depth of the network. In Figure 3(a), we demonstrate how the performance changes with variation in the effective depth of GET. Standard neural network architectures (e.g., ResNet, Diffusion Transformer (DiT)) increase the depth of the network by applying a block multiple times, increasing the number of parameters in the process. In contrast, GET weight-ties the repeated block, preserving the overall number of parameters.
__Variation of GET’s performance with iterations__: As shown in Figure 3a (left), GET’s performance improves with more iterations because of the increase in the effective depth of the network. However, this computation still constitutes a single forward pass through the network, and thus one-step generation. __Advantages of GET over other distillation methods such as PD and CD with a single step__: __Advantages over PD__: GET has several advantages over Progressive Distillation (PD). - _Complexity_: On CIFAR10, PD uses 13 training passes to distill 8192-step DDIM chains to 1 step. In addition, PD also carefully selects distillation targets in each step. - _Performance_: One-step GET (that uses a single forward pass) outperforms PD for both class-conditional and unconditional cases on CIFAR-10 in terms of FID score and Inception Score. - _Efficiency_: On CIFAR10, PD consumed 179M teacher model evaluations (2 DDIM steps * 128 (batch size) * (12 passes * 50k + 100k) = 179M), while GET only consumed 1M samples * 35 NFE = 35M teacher model evaluations. __Advantages over CD__: CD uses far more complex distillation settings compared to GET. As shown in our paper, - _Complexity_: CD leverages trajectory information from the teacher diffusion model and employs perceptual loss (LPIPS), which results in forward passes through 3 models during the distillation process. - _Efficiency_: On CIFAR10, CD requires significantly more teacher model evaluations (512 (batch size) * 800k iterations = 409.6M evaluations) compared to our method, which consumes 1M samples * 35 NFE = 35M teacher model evaluations. - _Data requirement_: CD assumes access to the dataset on which the original diffusion model was trained. In contrast, GET can be trained fully on synthetic data. This is beneficial in cases where we do not have access to the original dataset on which the diffusion model was trained. - _Auxiliary loss_: CD uses LPIPS perceptual loss as an auxiliary loss during training.
In the absence of this loss, CD’s performance drops considerably (even doubles). Please see Figure 4 in CD’s paper. __Sampling speed of GET against PD and CD__: To ensure a fair comparison between GET and PD, we would need to benchmark GET against a PD baseline that uses a ViT/DiT architecture. However, we do not have access to a ViT/DiT model that is distilled via PD on CIFAR-10. Similar concerns hold true for CD. As a proxy, we report the sampling time of EDM (50k images, 1 NFE). A single forward pass through EDM requires 27s, whereas GET-Tiny completes sampling in 14.7s, GET-Mini takes 23.3s, and GET-Small needs 35.7s (all under 6 iterations). Considering that we use significantly fewer teacher model evaluations than PD (35M vs 179M (lower bound) or 1.433B (upper bound)), the performance can potentially be improved by using more training data and by increasing training iterations. As an example, GET-Tiny, with 8.9M parameters and 14.7s sampling time, can reach an FID of 11.47 and an IS of 8.64 (while PD-EDM has an IS of 8.69 at > 410M teacher evaluations), given 4$\times$ training iterations on *the same data* (35M teacher evaluations). If there is a publicly available model that the reviewer would like us to benchmark against, we are happy to evaluate it. __Two-stage denoising with GET__: GET learns a direct mapping between noise and image pairs without any use of temporal embeddings. As a result, we cannot stack two GET models to get two-stage denoising. Further, in our paper, we demonstrate empirically that GET benefits from its DEQ architecture, where we solve for a fixed point. Reducing the number of iterations in the equilibrium transformer to 3 will lead to poor convergence of this fixed point, and might adversely affect the performance of GET. __Classifier-free guidance while training GET__: Thanks for this suggestion! It is certainly possible to train GET with classifier-free guidance. This is an interesting experiment that we will try in the future.
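The teacher-evaluation accounting used throughout this rebuttal can be reproduced with a few lines of arithmetic; the expressions below simply restate the batch sizes, pass counts, and NFE figures quoted above:

```python
# Teacher model evaluations for each distillation method on CIFAR-10,
# reproducing the accounting given in the rebuttal above.

# Progressive Distillation (lower bound): 2 DDIM steps per target,
# batch size 128, 12 passes of 50k iterations plus a final 100k.
pd_evals = 2 * 128 * (12 * 50_000 + 100_000)

# Consistency Distillation: batch size 512, 800k training iterations.
cd_evals = 512 * 800_000

# GET (offline distillation): 1M noise/image pairs, each generated
# with a 35-NFE teacher sampling run.
get_evals = 1_000_000 * 35

print(f"PD : {pd_evals / 1e6:.1f}M")   # 179.2M
print(f"CD : {cd_evals / 1e6:.1f}M")   # 409.6M
print(f"GET: {get_evals / 1e6:.1f}M")  # 35.0M
```

This confirms the claimed ordering: GET's offline budget (35M) is well under PD's lower bound (179.2M) and CD's budget (409.6M).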
--- Rebuttal Comment 1.1: Title: Thanks for your responses. Comment: Thanks for the responses and experiments. The overall idea is quite interesting and I think my questions have been well addressed. Therefore, I am willing to increase my score to positive side.
Summary: The submission proposes the Generative Equilibrium Transformer (GET), a lightweight refinement of the vision transformer that is well-suited as an efficient single-step student model for diffusion distillation. The authors empirically show that GET outperforms classic networks in terms of performance, model size, and inference efficiency. Overall, the introduced architecture improvements shed light on further studies for improving student models in diffusion distillation. Strengths: 1. The introduction of DEQ for efficient students is novel: The idea of introducing the DEQ model into the design of a more efficient student model is novel. Until now, most diffusion distillation works have used the same architecture for the student model as for the teacher model. Few of them have investigated the design of an efficient student model, which is potentially important for applications with strict requirements on inference efficiency. The authors bring in the DEQ model to design a ViT-based encoder-decoder student network and show promising performance on the CIFAR10 and ImageNet64 data. 2. Good Writing: The writing of this paper is clear and easy to follow. Weaknesses: Insufficient evaluations: the paper proposes an architecture-level improvement, the GET, that is related to ViT. Readers would be more convinced if GET were compared with ViT-based diffusion models on large-scale modeling tasks, such as DiT on ImageNet 128 and 256. I think the pros and cons of the proposed GET could be observed more clearly on large-scale benchmarks. Besides, the one-step student model shows an FID of 6.91 on the CIFAR10 dataset, which is significantly worse than Consistency Distillation, which has an FID of 3.55. I am curious about the possibility of combining GET with other distillation methods such as CD to obtain stronger performance. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: See the weakness above.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: The author has discussed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your encouraging feedback! Please find our responses to your questions and concerns below. __Extensive Large Scale Evaluations__: It is certainly possible to scale up to larger datasets like ImageNet, but this would require significantly more computing resources. For example, even training Progressive Distillation on ImageNet 64 would consume 5 days on 64 TPUv4 chips, which is equivalent to or more than 7680 A100 hours. We have tried our best to demonstrate the scaling capabilities of GET in terms of training FLOPs and parameters (see Figure 3a/b) to support the possibility of transferring GET to more complex settings. __Comparison with CD__: Thanks for pointing out that CD is a strong baseline. However, CD uses far more complex distillation settings compared to GET. As shown in our paper, - _Complexity_: CD leverages trajectory information from the teacher diffusion model and employs perceptual loss (LPIPS), which results in forward passes through 3 models during the distillation process. - _Efficiency_: On CIFAR10, CD requires significantly more teacher model evaluations (512 (batch size) * 800k iterations = 409.6M evaluations) compared to our method, which consumes 1M samples * 35 NFE = 35M teacher model evaluations. - _Data requirement_: CD assumes access to the dataset on which the original diffusion model was trained. In contrast, GET can be trained fully on synthetic data. This is beneficial in cases where we do not have access to the original dataset on which the diffusion model was trained. - _Auxiliary loss_: CD uses LPIPS perceptual loss as an auxiliary loss during training. In the absence of this loss, CD’s performance drops considerably (even doubles). Please see Figure 4 in CD’s paper. __Combining GET with other distillation methods__: We agree that combining GET with other distillation techniques like consistency distillation is a promising direction.
However, pursuing this avenue necessitates training GET as a diffusion model with a denoising loss, followed by distilling it into a few-step generative model using CD or PD. This process would require multiple model training phases, rendering the overall pipeline more complex. This complexity runs contrary to our initial motivation of using a DEQ-based architecture for diffusion distillation in a single forward pass. We are happy to answer any follow-up questions that you might have. --- Rebuttal Comment 1.1: Comment: Thank you for the authors’ reply. I understand that a large-scale experiment may be too strict a request for the rebuttal period. I think the submission has demonstrated its proposed GET architecture clearly and made sufficient explorations of each component of GET on the CIFAR10 dataset. Overall, GET has shown promising performance with significantly fewer parameters and computational costs (both for training and inference) than default UNet architectures. I think the work on GET may have high impact on elucidating the space of student models for diffusion distillation, so the work has significant novelty. Combined with its neat writing and clear motivation, I decided to raise my score to 6 to recommend the submission for acceptance. I think it would be nice (though not necessary) if the authors could use their proposed GET for training diffusion models (potentially with additional time-embedding layers), instead of distilling existing diffusion models, to explore the possibility of the GET architecture in more general applications.
Summary: This paper proposes a new model, called the Generative Equilibrium Transformer (GET). GET is a deep equilibrium model, trained as an implicit model to match noise/image pairs generated with a pretrained diffusion model, and thereby distill that pretrained (multi-step) diffusion model into a fast, single-step approach. Experiments on CIFAR-10 demonstrate that this approach is able to distill diffusion models into a fast, efficient architecture that strictly outperforms an explicit implementation. Strengths: This is a nice paper, with promising results on distillation of diffusion models into very fast, efficient models. Although only conducted on a relatively small-scale dataset, experiments hint at scalability and efficiency, strictly outperforming an explicit counterpart (implemented as a ViT). Weaknesses: Overall, this is a strong paper with thorough experiments. A few points, however, remain unaddressed: - How does the approach depend on the number of noise/data samples synthesized with the pretrained diffusion model? For conditional models, do we need to adapt the sampling strategy? - While experiments on scalability of the model size are conducted on a small scale, the paper would benefit massively from applying the method to some larger pretrained model, ideally on a more demanding task like text-to-image (although intermediate steps on either (i) higher-resolution images, e.g. LSUN or FFHQ, or (ii) more complex tasks like class-conditional ImageNet could suffice). For further comments, please refer to the "Questions" section. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: - Can the DEQ approach be used to distill a pretrained UNet (like Stable Diffusion) into a GET? - Why no experiments on convolutional architectures, which are very widely used in the community? For example, could a DEQ be initialized from a pretrained UNet? Confidence: 3: You are fairly confident in your assessment.
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 4 excellent Limitations: addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are encouraged to know that the reviewer feels that this paper is strong with thorough experiments! We have tried our best to answer your questions below. __Variation of performance with number of samples__: Given this is a supervised learning setup, we anticipate that GET’s generalization will improve as we increase the number of noise/image pairs, up to a certain threshold, beyond which saturation is likely to occur. To test this hypothesis empirically, we sampled an additional 1M data pairs for GET in the conditional setting and trained GET-B using the total 2M data pairs. The resulting model achieved an FID of **5.66** and IS of **9.63**, while the model trained on 1M data pairs has an FID of 6.23 and IS of 9.42. We believe that further scaling up of training data is certainly possible. GET might also potentially benefit from better supervision through perceptual loss. __Sampling strategy for conditional models__: For class-conditional sampling, we need to provide the class label in addition to the initial Gaussian noise as inputs to GET. GET generates images in a single forward pass for both class-conditional and unconditional cases. As GET learns a direct mapping from the initial noise and conditioning to the resulting images, we do not need to adopt a special sampling strategy to generate class-conditioned images. This is a potential advantage of GET over distillation methods like PD in conditional settings. __Large scale scalability experiments__: We agree that conducting experiments on larger datasets would indeed provide more comprehensive insights into the scaling capabilities of GET. However, we lack the computational resources to perform such large-scale experiments. For instance, consistency distillation uses 64 Nvidia A100 GPUs for experiments on ImageNet-64x64 and LSUN 256x256. Progressive distillation reports the use of 64 TPUv4 chips for their large-scale experiments on ImageNet-64x64 and LSUN 256x256.
__Can the DEQ approach be used to distill a pretrained UNet (like Stable Diffusion) into a GET?__ We can distill any pretrained diffusion model regardless of the network architecture into GET as long as we have noise/image pairs from the pretrained diffusion model. __Experiments on convolutional architectures like UNet__: Offline distillation for convolutional architectures has been explored by previous works like knowledge distillation (Luhman & Luhman, 2021). Knowledge distillation distills DDIM sampling chains with 1000 steps and uses UNet architecture. We outperform knowledge distillation on metrics like FID Score and Inception Score. __Initializing GET with pretrained UNet__: While we cannot initialize GET network with weights of a pretrained UNet, we can use noise/image pairs from (multi-step) UNet to distill diffusion trajectories into GET. Please let us know if you have any follow up questions and other concerns. We are happy to answer them! --- Rebuttal Comment 1.1: Comment: Thanks for your response! Let me clarify my question on "Initializing GET with pretrained UNet". What I meant is, why not apply DEQ-style training to a convolutional architecture? This should be possible in my understanding but happy to be corrected here. I still think that the paper would benefit very much from being applied to either (i) distillation of a strong, pretrained txt2img model or (ii), and as mentioned by other reviewers, to high-res datasets. --- Reply to Comment 1.1.1: Title: Reply to Reviewer GPxt Comment: Thank you for your insightful suggestions and for your clarification on "initializing GET with pretrained UNet"! We apologize for the misunderstanding regarding your question. We sincerely hope to address your question regarding the convolutional scheme. We agree that DEQ-style architecture training can indeed be extended to convolutional designs. 
It is a bit non-trivial to convert UNet into a DEQ, as it has cross-connections across upsampling and downsampling blocks, which also contribute to its efficiency for image generation tasks. While we did experiment with the MDEQ [1] architecture, a purely convolutional design, this architecture did not perform on par with UNet/ViT for image generation. The UNet structure, particularly within the realm of diffusion models, has been iteratively refined by researchers over time. Converting a UNet into the DEQ framework would require designing a proper input injection scheme (as it has multiple downsampling and upsampling stages and cross-stage connections) and improving its gradient flow in the backward pass. Thus, adapting the DEQ to convolutional designs remains a promising and challenging topic for future research. Further explorations are required to pinpoint the optimal architectural elements, like the injection layer, that could potentially outperform, or at least match, the convolutional UNet. In this work, we thus decided to focus on GET and demonstrate its benefits in the context of distilling diffusion models. [1] Bai, Shaojie, Vladlen Koltun, and J. Zico Kolter. "Multiscale deep equilibrium models." Advances in Neural Information Processing Systems 33 (2020): 5238-5250. We appreciate your suggestions on text2img generation and experiments with high-resolution datasets! However, these experiments demand substantial computational resources beyond our current reach. For example, even with ImageNet 64x64 images, progressive distillation requires 5 days of training across 64 TPUv4 chips, approximately 7680 TPU hours. Considering a similar amount of A100 hours, this leads to a cost of over **$8,000 for a single trial** using cloud GPUs. Nonetheless, we've made every effort to showcase GET's scalability and parameter efficiency on CIFAR-10.
In addition, we plan to extend the code to ImageNet and seek collaborative, community-driven efforts to execute large-scale distillation experiments. Thank you again for your valuable suggestions! We hope this discussion further answers your questions.
Rebuttal 1: Rebuttal: We thank all the reviewers for their thoughtful comments and suggestions. Here, we address some of the common concerns raised by the reviewers. __What does one-step generation mean?__ We define one-step generation as the ability to generate an image directly from Gaussian noise in a _single_ forward pass through the network. While there can be multiple iterations within the equilibrium transformer in this single forward pass, all of these iterations contribute to the “effective depth” of the network. __Evaluation on larger datasets__: We agree with the reviewers that experiments on larger datasets like ImageNet will significantly strengthen the results. However, scaling up to ImageNet would require significantly more computing resources. For comparison, consistency distillation uses 64 Nvidia A100 GPUs for experiments on ImageNet-64$\times$64 and LSUN 256$\times$256. Progressive distillation needs 5 days on 64 TPUv4 chips for experiments on ImageNet-64$\times$64 and LSUN 256$\times$256, which is equivalent to at least 7680 A100 GPU hours. We have tried our best to demonstrate the scaling capabilities of GET in terms of training FLOPs and parameters (see Figure 3a/b) to support the possibility of transferring GET to more complex settings. __New result on teacher network evaluations__: Offline distillation with GET requires significantly fewer teacher model evaluations (35M) compared to progressive distillation (179M (lower bound) or 1.433B (upper bound)) and consistency distillation (409.6M).
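The single-forward-pass-with-internal-iterations idea above can be made concrete with a minimal sketch. Everything here (the `equilibrium_forward` helper, the toy contractive `layer`, the dimensions) is a hypothetical illustration, not the actual GET solver:

```python
import numpy as np

def equilibrium_forward(z0, e, layer, max_iters=50, tol=1e-6):
    """One forward pass: iterate the weight-tied update z <- layer(z, e)
    toward a fixed point. Every iteration reuses the same weights, so the
    'effective depth' grows without adding parameters."""
    z = z0
    for _ in range(max_iters):
        z_next = layer(z, e)
        if np.linalg.norm(z_next - z) < tol:
            return z_next
        z = z_next
    return z

rng = np.random.default_rng(0)
W = 0.1 * rng.normal(size=(8, 8))  # small weights keep the update contractive
U = 0.1 * rng.normal(size=(8, 8))

def layer(z, e):
    # toy weight-tied block; tanh keeps the iterates bounded
    return np.tanh(z @ W + e @ U)

e = rng.normal(size=(1, 8))        # stands in for the injected noise embedding
z = equilibrium_forward(rng.normal(size=(1, 8)), e, layer)
# at convergence, z is (approximately) a fixed point: z == layer(z, e)
```

How gradients flow through such a solver (implicit differentiation versus unrolling) is a separate concern and is not shown here.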
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper proposes a new model architecture based on deep equilibrium models and applies it to distilling pretrained diffusion models, by generating samples offline and only utilizing the noise-sample pairs generated by the diffusion models. The architecture consists of embedding, injection transformer, equilibrium transformer and decoder. Empirical results show that the proposed approach leads to comparable performance to the other online distillation approaches, and the architecture is more efficient than the ViT architecture in terms of training efficiency, model capacity and inference speed. Strengths: - The paper is clearly written and easy to follow. - The paper shows that deep equilibrium models can be a competitive model class in the context of distilling diffusion models. - Extensive comparison with ViT-based models demonstrates the effectiveness of the GET architecture. Weaknesses: - It's not clear to me why the work proposes to apply GET to this specific problem of distilling diffusion models. In principle, this architecture can serve as training a diffusion model directly, similar to DiT. And I'm not convinced that offline distillation of diffusion models is a promising direction to work with, given the expensive sampling time the diffusion models need to generate the offline data (and that's why we need distillation). It would be more convincing if the work could show the effectiveness of the proposed architecture in a broader context, or explain why the proposed architecture is especially useful for distillation. - In l.228 it is claimed that other distillation approaches use a much larger data budget, but I don't think this comparison is fair, as the other approaches only require one functional evaluation for generating one sample, while this approach requires going through the whole sampling process of the diffusion model. 
I'd like to see a comparison of how many functional evaluations of the teacher model are required for different approaches. - I'd like to see a comparison of training/sampling speed with other distillation approaches, as increasing sampling speed is the main motivation for doing distillation. I'm a bit concerned that the fixed point iteration would hinder the training/sampling speed compared to the original U-Net architecture. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - How is z initialized at the start of the fixed point iteration? Is it sensitive to the initialization of z? - In Table 4, it would be great to include the sampling speed as well. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your careful review and good questions. Please find our responses to your questions and concerns below. __Motivation to use DEQ for distillation__: Our motivation to model the student network as a DEQ stems from the observation that the relatively complex process of distilling diffusion models has an element of a fixed-point process, as indicated by recent works like Consistency Models (Song et al. 2023) and TRACT (Berthelot et al. 2023). Ideally, we want a network architecture that has the ability to adapt its compute requirements to trade off perceptual quality for compute. DEQs possess both of these capabilities. The empirical results in our paper indeed indicate that modeling the student network (i.e. GET) as a DEQ is vital to achieve good performance in our set-up of one-step distillation. __Why offline distillation?__ Our primary goal is to demonstrate a relatively simple technique for distillation that does not use any trajectory information but can still achieve comparable performance to much more complex distillation techniques like progressive distillation. We generate data (noise/image pairs) in an offline manner to reduce the overhead due to sampling during training. Further, this data generation needs to be done only once, and the generated data can be reused later. Offline distillation with GET also requires fewer teacher evaluations compared to progressive distillation and consistency distillation, as we demonstrate in the next point. __Comparison of overall number of functional evaluations of teacher model__: Progressive distillation requires a significantly higher number of function evaluations for the teacher network than offline distillation. 
We state the exact number of function evaluations of the teacher model below:
- **GET**: 1M (data samples) * 35 (NFEs for EDM sampling) = 35M
- **Progressive distillation** (assuming a batch size of 128 for each of 8 TPUs): 2 DDIM steps * 128 (batch size) * 8 (TPUs) * (12 passes * 50k iterations + 100k iterations for the last 2-step-to-1-step pass) = 1.433B
- **Progressive distillation** (assuming a batch size of 128 shared across 8 TPUs): 2 DDIM steps * 128 (batch size) * (12 passes * 50k + 100k) = 179M
- **Consistency distillation**: 512 (batch size) * 800k (iterations) = 409.6M

In addition, the perceptual loss requires the same number of network evaluations as the teacher model. Note that progressive distillation uses DDIM with 8192 steps for CIFAR-10. __Broader experiments with GET as a general architecture__: We acknowledge that GET is a general architecture that might have potential advantages in other applications. Our application of GET in distilling diffusion models was motivated by the inherent fixed point solving mechanism of the distillation process. We empirically demonstrated advantages of the GET architecture in distillation over its non-weight-tied counterparts. We leave the investigation of the performance benefits of GET in other potential applications as future work. __GET as a backbone architecture for diffusion models__: While both GET and DiT are inspired by ViT, a primary architectural difference lies in the absence of temporal embeddings in GET, which makes it tricky to use the current GET as a backbone architecture to train diffusion models from scratch. While it is feasible to incorporate temporal embeddings into GET and train it using a denoising loss objective, conventional training of diffusion backbone architectures with this loss objective involves denoising images at a single time step. As a result, the advantages of the fixed point solving mechanism used in DEQs might be less pronounced in this case. 
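Since the teacher-NFE totals quoted above are plain arithmetic, a few lines (with the counts copied from this rebuttal, not independently verified) reproduce them:

```python
# Teacher-model function evaluations (NFEs), numbers as quoted in the rebuttal
get_nfe = 1_000_000 * 35                          # 1M noise/image pairs x 35 EDM NFEs
pd_upper = 2 * 128 * 8 * (12 * 50_000 + 100_000)  # batch 128 on each of 8 TPUs
pd_lower = 2 * 128 * (12 * 50_000 + 100_000)      # batch 128 shared across 8 TPUs
cd_nfe = 512 * 800_000                            # consistency distillation

for name, n in [("GET", get_nfe), ("PD lower", pd_lower),
                ("PD upper", pd_upper), ("CD", cd_nfe)]:
    print(f"{name}: {n / 1e6:.1f}M")
# GET: 35.0M, PD lower: 179.2M, PD upper: 1433.6M, CD: 409.6M
```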
In contrast, the process of distilling diffusion trajectories inherently involves a fixed point solving mechanism, which allows the benefits of this mechanism to be better highlighted in this specific application. __Comparison of training and sampling speed with other distillation methods__: To ensure a fair comparison between GET and PD, we need to benchmark GET against a PD baseline that uses a ViT/DiT architecture. However, we do not have access to a ViT/DiT model that is distilled via PD on CIFAR-10. As a proxy for PD, we report the sampling time of EDM (50k images, 1 NFE). A single forward pass through EDM requires 27s, whereas GET-Tiny completes sampling in 14.7s, GET-Mini takes 23.3s, and GET-Small needs 35.7s (all under 6 iterations). Considering that we use significantly fewer teacher model evaluations than PD (35M vs 179M (lower bound) or 1.433B (upper bound)), the performance can potentially be improved by using more training data and by increasing training iterations. As an example, GET-Tiny, with 8.9M parameters and a 14.7s sampling time, can reach an FID of 11.47 and an IS of 8.64 (while PD-EDM has an IS of 8.69 at > 410M teacher evaluations), given 4$\times$ training iterations on *the same data* (35M teacher evaluations). If there is a publicly available model that the reviewer would like us to benchmark against, we are happy to evaluate it. > In Table 4, it would be great to include the sampling speed as well. The sampling speed of ViT-B is 21.66 secs, and that of GET-Mini is 17.18 secs (4 iterations), 20.16 secs (5 iterations) and 23.27 secs (6 iterations). We will update Table 4 to include these sampling speeds. __Initialization of z at the start of fixed point iterations__: We initialize z at the beginning of the fixed point iterations by sampling it from the standard Gaussian distribution. We expect the fixed point iterations to show limited sensitivity to the initial value of z, provided that this value is reasonably set. 
Please reach out to us in case you have any follow up questions/concerns. --- Rebuttal Comment 1.1: Comment: Thank the authors for addressing my questions in the rebuttal. I decided to increase my score from 4 to 5, mainly due to the clarification of motivation, and the teacher NFE comparison with existing distillation approaches.
Summary: Distillation of diffusion models into smaller models that require fewer steps for generation is an important topic in current research. The authors propose to use a deep equilibrium model as the student. In particular, the student model is a Generative Equilibrium Transformer (GET) that consists of two main modules -- (a) an Injection Transformer that outputs a noise embedding and (b) the Equilibrium Transformer that is trained using the fixed point iteration, where the fixed point is the latent embedding of a clean image, which, when passed through an image decoder, yields the image generated by the diffusion model. GET is trained using a simple L1 loss between the output of GET and the image generated by the diffusion model. At test time, a few forward passes through the equilibrium transformer seem sufficient to generate a clean image. The paper also performs a series of experiments demonstrating advantages in terms of data and parameter efficiency, sampling speed etc. compared to existing methods. Strengths: 1. The paper is easy to understand at the implementation level and well-written. 2. The authors perform a good set of experiments to understand the trade-offs between data and parameter efficiency, sampling speed, training compute and generation quality. These experiments put the proposed method favorably compared to existing methods. 3. In particular, with only about three forward passes through the distilled model, with a fraction of the parameters and sampling time compared to a ViT, better FID scores are observed using the proposed method. Weaknesses: 1. I am not sure if the proposed method can be called one-step generation. My understanding is that some steps through the equilibrium transformer are needed before getting close to convergence to the fixed point. This is also shown in Figure 3. Perhaps some clarification is needed as to what one-step means in the title. 
The equilibrium model is trained to produce an image from noise directly, but it still needs multiple passes to get a good-quality image. 2. CIFAR-10 is used for the experiments. While the generated images look okay, it is hard to determine their quality as they are small. Evaluation on just CIFAR-10 is weak in my opinion. A dataset with larger images should make the results stronger. 3. Some of the design choices seem arbitrary. For example, why is a separate Injection Transformer necessary? Why the student model has to be a Deep Equilibrium Model is also not clear from the paper. What is it about training to produce the fixed point that is important for distillation? These questions can be partly answered with ablation analysis, but some intuition should also be provided. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please see weaknesses above. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Yes, limitations have been adequately addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your encouraging feedback! We have tried our best to address all your questions and concerns below. __Clarification about one-step generation__: We define one-step generation as the ability to generate an image directly from Gaussian noise in a _single_ forward pass through the network. There can be multiple iterations over the equilibrium transformer during this forward pass but it is important to note that all of these iterations contribute to the depth of the network. In Figure 3(a), we demonstrate how the performance changes with variation in the effective depth of GET. Standard neural network architectures (e.g., ResNet, Diffusion Transformer (DiT)) increase the depth of the network by applying a block multiple times, increasing the number of parameters in the process. In contrast, GET weight-ties the repeated block, preserving the overall number of parameters. __Evaluation on larger datasets__: We agree that large scale experiments will provide more comprehensive insights into the capabilities of GET. However, scaling up to ImageNet would require significantly more computing resources. For comparison, consistency distillation uses 64 Nvidia A100 GPUs for experiments on ImageNet-64$\times$64 and LSUN 256$\times$256. Progressive Distillation reports use of 64 TPUv4 chips for their large scale experiments on ImageNet-64$\times$64 and LSUN 256$\times$256. __Clarification about design choices within GET__: - __Necessity of the Injection Transformer__: The reviewer has a valid concern about the role of the injection transformer. To address this concern, we trained a model that does not have the injection transformer, but is otherwise architecturally similar to GET. We report the results in the table below. | Model | Params | FLOPs | FID | IS | |----------|:-------------:|:---:|:----:|:----:| | GET-M | 19.2M | 15.2G | 10.72 | 8.69 | | GET-M-no-inj | 10.7M | 15.5G | 14.10 | 8.55 | Thus the injection transformer improves GET’s performance. 
- __Motivation of the student model to be a DEQ__: Our motivation to model the student network as a DEQ stems from the observation that the relatively complex process of distilling diffusion models has an element of a fixed-point process, as indicated by recent works like Consistency Models (Song et al. 2023) and TRACT (Berthelot et al. 2023). Ideally, we want a network architecture that has the ability to adapt its compute requirements to trade off perceptual quality for compute. DEQs possess both of these capabilities. The empirical results in our paper indeed indicate that modeling the student network (i.e. GET) as a DEQ is vital to achieve good performance in our set-up of one-step distillation. We also include new empirical results, reported in the table below, which show that a weight-tied GET outperforms the non-weight-tied counterpart in terms of both FID and IS, even though the latter has significantly more parameters. We note that this non-weight-tied model has the same FLOPs as the original weight-tied model. This outcome further underscores that weight-tying within GET indeed contributes to improved performance. | Model | Params | FID | IS | |----------|:-------------:|:----:| :----:| | GET-B | 62.2M | 7.42 | 9.16 | | GET-B-non-WT | 310M | 7.55 | 9.08 | - __Why train for fixed point iterations?__ As mentioned in the previous point, modeling GET as a DEQ that solves for the fixed point helps to achieve superior one-step distillation compared to its non-weight-tied counterpart. Figure 3 in the main paper also corroborates this, as it indicates that fixed point iterations lead to improved FID scores. We are happy to answer any follow up questions and address any concerns you may have. --- Rebuttal Comment 1.1: Title: Thank you for the response Comment: I thank the authors for their response. I acknowledge that I have read their response to my comments as well as those of the other reviewers. I think what the authors mean by one-step generation is now clear. 
As for some important design choices -- the injection transformer and the student model being a deep equilibrium model -- the ablation experiments are certainly helpful and appreciated. However, the intuitions still remain unclear and tenuous. The NFE comparison with other competing models is also a very good result. Overall, the paper has some good ideas and good results, so I am recommending a Weak Accept.
Convergence of Alternating Gradient Descent for Matrix Factorization
Accept (spotlight)
Summary: This paper considers a matrix factorization problem: it seeks to minimize a function of the form $f(X,Y) = 1/2 \Vert A - XY^T \Vert_F^2$. In general, matrix factorization problems have applications to matrix sensing and phase retrieval, and are seen as prototypical non-convex optimization problems. The special case of the matrix factorization problem considered in this paper is very simple, and should be seen as a first theoretical step toward understanding other matrix factorization problems with more practical applications. The contribution of this paper is to propose a particular initialization of the optimization problem that induces an improved convergence rate of alternating gradient descent to recover the matrix $A$. This is supported by both rigorous results and simulations. Strengths: The paper is clear and well-written. I checked the proofs of the main text and believe they are correct. I think the paper addresses an interesting problem in matrix factorization, but I am not convinced by the specific initialization that they propose. This is detailed below. Weaknesses: I would like the authors to discuss the motivations underlying their paper in more detail. My understanding is that factorizing a matrix $A$ into a product $XY^T$ is not hard; for instance, one can choose $X=A$ and $Y = Id$. Of course, it is interesting to see if gradient descent is able to recover a factorization of the matrix $A$, given that there is non-convexity and non-smoothness. However, it seems to me that the initialization that you are proposing is similar to the solution $X=A$ and $Y = Id$. In this sense, the papers [YD21] and [JCD22] are stronger, as they prove convergence from a Gaussian initialization, which seems further away from the solution. Of course, a contribution of your paper is to improve the convergence rates. However, it does not seem to me that obtaining optimal convergence rates is an identified challenge in the literature. 
For instance, you compare your convergence rate to the rate of [JCD22], but again these authors use a more challenging initialization, and prove that incremental dynamics appear, which is much harder than proving a simple convergence rate. In light of the above points, I think that the comparison with related works in Section 3.1 is a bit unfair. Given that this model is toyish, it is important to focus on behaviors that we believe generalize to more sophisticated (matrix multiplication, non-convex) models. While incremental learning is one of them, I didn't understand whether an improved convergence with an atypical initialization satisfies this criterion. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Given that the convergence rates are exponential, wouldn't logarithmic y-axes be more suitable to plot the evolution of the errors? - I think it is a bit tough to skip the proof of Lemma 4.1. The least would be to explain that X -> f(X,Y) is a convex ||Y||^2-smooth function and thus you can use classical (but not really "direct"!) manipulations from convex optimization to derive the bounds. - Could you comment on why you are using alternating gradient descent rather than full gradient descent? Is it related to using only marginal smoothness in Lemma 4.1? Could your techniques be adapted for full gradient descent? - l.312: Are you using different Gaussian matrices for X_0 and Y_0? - l.319: Can you comment on the very small value for \nu? - Figure 3: It seems that the benefit of the proposed initialization reduces when using large stepsizes. Could you try even larger stepsizes? Can you push simulations to the limit of stability of the gradient descent equations? Typos - l.407: "non-asympototic" -> "non-asymptotic" - l.80: "and" -> "an" - l.263 and footnote same page: "PL-equality" -> "PL inequality"? 
- l.280: "interation" -> "iteration" - l.300: inconsistent notation for the transpose Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 1 poor Limitations: See weaknesses section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Summary: This paper provides a new analysis for the convergence of alternating gradient descent on the low-rank matrix factorization problem $min_{X,Y} f(X,Y) = ||XY - A||_F^2$. The authors show that, by warm-starting the solution using the target matrix $A$, and with appropriate step size scaling, they can achieve $\epsilon$ error in $f$ in $O(\kappa^2 \log \frac{||A||_F}{\epsilon})$ iterations of alternating gradient descent, where $\kappa$ is the pseudo-condition number of $A$, as long as the rank of $XY$ is larger than the rank of $A$ by a constant factor. This improves the best known dependence on $\kappa$ from $\kappa^3$ to $\kappa^2$. Also, in contrast to some previous works, the initialization step does not require computing an SVD of $A$. Strengths: - The main technical contribution of this paper is the introduction and analysis of an asymmetric warm starting rule for matrix factorization. To the best of my knowledge this is an original technical contribution. - The theoretical analysis is interesting and advances the state of the art. Matrix factorization is a fundamental optimization task that, among others, can be viewed as a subproblem of neural network training. As a result, progress in the theoretical understanding of optimization algorithms is highly relevant. - The experimental results are interesting and show a surprising quality gap between different initialization methods. - The paper is very well written and technically sound. Weaknesses: - It seems to me like the assumption in line 454 that $V^\top \Phi_1$ has i.i.d. entries is incorrect, and it only has i.i.d. columns. In this case, a different lemma should be used in place of Proposition A.1. Does this change anything in the bounds? - I have some comments about the experiments in Section 6. It seems like the authors compare different algorithms using the same step size; however, in my opinion it would be fairer to compare using the best fixed step size for each algorithm. 
This is especially so because of the special imbalanced step size setup in the authors' algorithm. For example, in Figure 3, Random and ColSpan significantly improve when increasing the step size from $0.001$ to $0.1$. What happens if the step size is increased even more? Can we rule out that the improvement in convergence is because of the step size magnitude? Some typos: - decent -> descent in multiple theorem/lemma statements. - Line 231: it should not be a strict subset Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. Could the authors give some intuition behind the importance of imbalanced scaling? I understand that it is used to balance the norms of $X$ and $Y$, but the fact that it is so significant in the experiments is somewhat surprising so I am wondering if there is something deeper. 2. Could this approach be used in the noisy setting, i.e. when $A$ is not exactly low rank? What are the difficulties? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 4 excellent Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Comment: I thank the authors for their response, they have covered my questions.
Summary: The authors explore the use of alternating gradient descent (AGD) with a fixed step size for asymmetric matrix factorization. They demonstrate that a finite number of AGD iterations can achieve, with high probability, an $\epsilon$-optimal factorization, even when starting from an asymmetric random initialization. Empirical evidence supports the significant convergence rate improvements resulting from their proposed initialization. The authors' proof leverages a uniform Polyak-Lojasiewicz (PL) inequality and Lipschitz smoothness constant, providing a simpler approach for analyzing nonconvex low-rank factorization problems. The findings in this paper challenge the conventional belief that symmetrical randomization is necessary when the underlying data structure allows for symmetry. This novel result, supported by solid theoretical justification, showcases promising improvements and highlights the potential for broader impacts in machine learning and optimization. Strengths: - Excellent novelty - Theoretical proof is elegant - Promising numerical results Weaknesses: It is worth noting that there are numerous competitors beyond first-order methods in the field to solve an exact rank-r matrix decomposition problem. To further enrich the discussion, it would be valuable if the authors explored potential avenues for generalizing their results. For example, investigating the application of their findings in matrix completion or the singular value decomposition (SVD) of a low-rank matrix with added noise could provide insights into the broader applicability of their approach. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: There are two parts to the initialization. One is $A\Phi_1$ vs $\Phi_2$; another is $\sqrt{\eta}$ vs $\frac{1}{\sqrt{\eta}}$. I am curious whether $\sqrt{\eta}$ vs $\frac{1}{\sqrt{\eta}}$ is the major factor for bringing improvements. 
For example, what are the results if the authors use symmetrical random initialization with $\sqrt{\eta}$ on one side and $\frac{1}{\sqrt{\eta}}$ on another side? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: I don't foresee any potential negative societal impact resulting from this research. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Summary: This paper proves an improved convergence bound on alternating gradient descent for the asymmetric matrix factorization problem, i.e. given $A \in R^{m \times n}$, finding $X \in R^{m \times d}$ and $Y \in R^{n \times d}$ that minimize $||XY^{\top} - A||_F^2.$ This paper establishes that if $A$ is rank $r$ for some $r < d$ (so the optimal objective value is 0), then the algorithm converges to a solution with error at most $\varepsilon ||A||_F^2$ in roughly $\frac{d}{(\sqrt{d}-\sqrt{r})^2}\kappa^2 \log \frac{1}{\varepsilon}$ iterations, where $\kappa = \frac{\sigma_1(A)}{\sigma_r(A)}$. Thus, if the problem is constantly over-parameterized, i.e. $d-r = \Omega(r),$ this gives a dimension-independent convergence bound. A crucial piece in their algorithm and analysis is starting from an unconventional asymmetric initialization, where $X$ is initialized as $\frac{1}{\sqrt{\eta}}A\Phi_1$ and $Y$ as $\sqrt{\eta}\Phi_2,$ where $\Phi_1, \Phi_2$ are (normalized) iid Gaussian matrices, and $\eta$ is the step size. The paper also reports several numerical simulations demonstrating the significant advantage of their initialization. Strengths: The (asymmetric) matrix factorization problem is both an important problem in itself, and an important test bed for developing techniques to analyze non-convex optimization. I think this paper takes a significant step towards understanding (alternating) gradient descent in this setting. It improves on the previous best known bound for GD, which was ~$\kappa^3.$ I find the improvement due to the asymmetric initialization quite illuminating, and I think it will inspire other advantageous (yet quick) initializations for descent algorithms. Weaknesses: I don't see any major weaknesses. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - I think the bound on line 30 is incorrect? There should be a square root on d and r. Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you to all the reviewers for the helpful comments. Response to Reviewer Zy5R - [Typo in bound on Line 30:] Thanks! Yes, the bound should be $\frac{d}{(\sqrt{d} - \sqrt{r})^2}$, not $\frac{d}{d-r}$ Response to Reviewer BUrC - [Generality of results] The analysis extends beyond matrix factorization, but we prefer not to write details as we are in the process of writing up follow-up papers. - [Importance of two parts to the initialization] Thanks for this point. Both parts of the initialization are important. See FIGURE 2 in the attached PDF for a comparison of the convergence behavior when both parts are used, compared to when just one or the other part is used. We will add these empirical comparisons to the revised version. Response to Reviewer Ewek - [Assumption that $V^T \Phi_1$ has i.i.d. entries] This assumption is correct, since we assume $\Phi_1$ has i.i.d. Gaussian entries and $V$ has orthonormal columns, which implies that $V^T \Phi_1$ has i.i.d. Gaussian entries. - [Comparison of algorithms using larger step-size] This is a great point. Please see FIGURE 1 in the attached PDF, where we consider step-size $1$. Larger step-sizes lead to divergence, so step-size 1 is close to the edge of stability of the algorithms. Observe that while our initialization has a small advantage, the algorithms are all essentially comparable at the edge. This should not be surprising, since at $\eta=1$, $1/\sqrt{\eta} = \sqrt{\eta}$ and the asymmetric component of the initialization is minimal. We will include the comparison for $\eta = 1$ in the revised version and add this commentary. Understanding the theory of gradient descent for matrix factorization at the edge of stability (e.g., for this example, $\eta \approx 1$) is quite challenging, and beyond the scope of this paper. - [Typos] Thanks! We will fix these. - [Intuition for importance of imbalanced scaling] Good point. 
Our intuition is as follows: with this particular asymmetric initialization, the $X$ updates remain sufficiently small with respect to the scale of the initialization $X_0$, and the $Y$ updates remain sufficiently large with respect to the initial scale $Y_0$, so that alternating gradient descent behaves comparably to the \emph{linear} regression problem $\text{minimize}_Y \| A - X_0 Y \|_F^2$ (that is, $X = X_0$ is kept fixed at its initialization). - [Noisy setting, i.e. when $A$ is not exactly low rank] Empirically, yes, alternating gradient descent with our initialization converges to the best low-rank approximation to $A$ when $A$ is not exactly low-rank. The theoretical difficulty (although not impossibility!) is in carefully extending our Lemmas 4.5 and 4.6, which are no longer invariants of the algorithm beyond the exact low-rank setting. Response to Reviewer GCxr - [Significance of initialization] Our initialization is not comparable to $X = A$ and $Y = Id$. The sizes of the factors are completely different. - [Comparison to previous papers] While we agree that the papers $[$YD21$]$ and $[$JCD22$]$ prove convergence from a more generic starting point, our initialization has the same cost as a single step of gradient descent, and thus could potentially lead to an equally cost-effective initialization strategy (for matrix factorization and beyond). In this sense, our initialization is closer to the Gaussian initialization of $[$YD21$]$ and $[$JCD22$]$ than previous lines of work on spectral initializations. - [Optimal convergence rates as identified challenge] This was not an identified challenge in the literature previously. One contribution of our paper is identifying this challenge. - [Behaviors that extend to more sophisticated models] Our theory does extend to more sophisticated models. We prefer not to disclose details as we are in the process of writing up follow-up papers. - [logarithmic y-axes] Yes, good point and we can make this change in the revision. 
- [Proof of Lemma 4.1] You are right; we will include at least a summary of the proof, as you suggest, in the revision. - [Alternating rather than full gradient descent] Yes exactly, the marginal smoothness from alternating gradient descent allows us to prove a descent lemma, Lemma 4.1, which is the crucial first step in the convergence proof. We did not see how to prove a descent lemma for (non-alternating) gradient descent, but numerics indicate that the two algorithms behave essentially the same. - [$X_0$ and $Y_0$] We are using independent Gaussians for $X_0$ and $Y_0$, but the theory also holds if we use the same Gaussian matrix for $X_0$ and $Y_0$ (with $Y_0$ re-scaled). We chose to focus on the case where they are different, to more closely compare to the Gaussian random initialization in $[$YD21$]$ and $[$JCD22$]$. - [small value for $\nu$ in plot] A smaller $\nu$ leads to slightly better concentration of $f_0 = f(X_0, Y_0)$, see Proposition 4.3 part 2. - [Benefit of the proposed initialization at large stepsizes:] Good point. Please see the response to Ewek above, and FIGURE 1 in the attached PDF. Pdf: /pdf/cb2f7a6dd472d1300de27cad598949fcdca3e03b.pdf
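The rebuttal's intuition, that alternating gradient descent from this initialization behaves like the frozen-$X$ regression $\text{minimize}_Y \| A - X_0 Y \|_F^2$, can be sanity-checked numerically: since $X_0 = \frac{1}{\sqrt{\eta}} A \Phi_1$ with Gaussian $\Phi_1$ and $d \ge r$, the columns of $X_0$ span the column space of $A$ almost surely, so the frozen-$X$ regression already attains zero error on an exactly rank-$r$ matrix. A minimal sketch (sizes and constructions are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, r, d, eta = 12, 10, 2, 6, 0.1

# An exactly rank-r matrix A (illustrative construction).
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))

# Frozen factor X0 = (1/sqrt(eta)) A Phi1: its columns lie in col(A), and with
# d >= r independent Gaussian combinations they span col(A) almost surely.
Phi1 = rng.standard_normal((n, d)) / np.sqrt(n)
X0 = A @ Phi1 / np.sqrt(eta)

# Least-squares solution of the frozen-X problem min_Y ||A - X0 Y||_F^2.
Y_star, *_ = np.linalg.lstsq(X0, A, rcond=None)
residual = np.linalg.norm(A - X0 @ Y_star)
print(f"frozen-X regression residual: {residual:.2e}")  # ~0 up to round-off
```

This is why the linear-regression comparison is plausible in the exact low-rank setting; in the noisy setting the residual of the frozen-$X$ problem is no longer zero, which matches the rebuttal's comment on the difficulty of extending Lemmas 4.5 and 4.6.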
NeurIPS_2023_submissions_huggingface
2023
On the Robustness of Removal-Based Feature Attributions
Accept (poster)
Summary: This paper derives robustness results regarding a general class of feature attribution methods referred to as “removal-based feature attributions”, which includes occlusion methods, but also Shapley values and LIME explanations. The authors study Lipschitz-continuity properties of these methods with respect to changes in the modeled function and the inputs. They find that, under Lipschitz assumptions on the classifier's output, the removal-based attributions are also Lipschitz-continuous w.r.t. deviations in the inputs and the model. Strengths: **Generality.** The class of feature attributions studied in this work is kept quite general and it thus covers many relevant implementations such as Occlusion, LIME, and Shapley values. **Technical soundness.** I checked the proofs of some main results and was not able to identify any flaws. This makes this work a rigorous technical contribution in my view. **Good exposition and presentation.** The formalization and notation were introduced well, and the results were easy to follow. The proofs that I checked were sufficiently detailed to convince me of the correctness of the theoretical claims. The tables help to get an overview of the various results. **Interesting results on robust summarization techniques.** I particularly like the results on the robustness of the summarization techniques and the statements of the most robust aggregation techniques under each set of axioms (Lemma 8 - Lemma 11). They do not rely on any assumptions and can be operationalized directly, for instance for robust data valuation as done in the recent work mentioned (Wang & Jia, 2023). Weaknesses: **How robust should it be?** While the results are interesting, they are hard to interpret, because there is no notion of optimal robustness. As the authors correctly state, too much robustness amounts to invariance in the sense of Adebayo et al. On the contrary, too little robustness allows for adversarial attributions. 
Therefore, the computation of the robustness score allows for no clear interpretation. **Given the formalization done in prior work, the main results are not unexpected.** First, I would like to underline that central parts of the formalization in this work were transferred from Covert et al., 2021. For instance, the distinction between the removal strategy and the summarization technique, and the representation of the methods in this scheme, were introduced in that work. I think this should be acknowledged more clearly. Further, given that we assume the model to be Lipschitz-continuous (Assumption 1) and the probability divergence when changing the set of present features to be Lipschitz as well (Assumption 3), and that we consider only attributions that can be represented as a linear operator of the model outputs, it is not very surprising that we arrive at Lipschitz-continuity of the attributions w.r.t. the inputs and the model. While I see that intuitive results also need to be rigorously proven, I am uncertain about how many new insights are added. **Implications and Experiments.** The practical implications of the results still remain unclear to some extent. Taking a practitioner’s perspective, I wonder about what novel insights can be gained from this work. The finding that regularized networks have more robust (or less noisy) attributions has already been established (e.g., by Shah et al., 2021, and Dombrowski et al.). On a side note, Lipschitz-continuity does not imply that the method needs to pass the parameter randomization test, as a constant function would still be Lipschitz-continuous. I would be more interested in some constructive way to operationalize the findings, for instance to prevent “adversarial” explanations, as hinted at in the text. Can they help us construct better techniques? 
I think these questions are interesting and, to this end, it seems unfortunate that the experiments already conducted, and some further experimentation in this direction, did not make it into the main paper. **Minor Points** A line of related work uses removal strategies to benchmark feature attributions (e.g., Tomsett et al., 2020; Rong et al., 2022), which could be discussed as potentially related work. I wonder if the results in this work bear some connection with the attribution evaluation problem, which could be a possible application. Table 2 seems not to be referenced in the text and thus appears a bit context-less. -------------------------------------- **Summary.** A rigorously executed paper with general theoretical results on the robustness of attribution methods. However, the practical implications remain a bit unclear and there are no experiments in the main paper. Overall, the paper seems just above the bar of acceptance to me. I would be willing to further increase my score if the authors can convince me of the practical relevance and impact of their results. **References** Ian C Covert, Scott Lundberg, and Su-In Lee. Explaining by removing: A unified framework for model explanation. The Journal of Machine Learning Research, 22(1):9477–9566, 2021. Shah, Harshay, Prateek Jain, and Praneeth Netrapalli. "Do input gradients highlight discriminative features?" Advances in Neural Information Processing Systems 34 (2021): 2046-2059. Tomsett, Richard, et al. "Sanity checks for saliency metrics." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 34. No. 04. 2020. Rong, Yao, et al. "A consistent and efficient evaluation strategy for attribution methods." In International Conference on Machine Learning (pp. 18770-18795). PMLR, 2022. Wang, Jiachen T., and Ruoxi Jia. "Data Banzhaf: A robust data valuation framework for machine learning." International Conference on Artificial Intelligence and Statistics. PMLR, 2023. 
Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: I would appreciate if the authors could elaborate on the following points: * How can the results be operationalized in practice? Can something constructive be derived from them? * How would the authors quantify the degree of robustness that is desirable (e.g., not invariant, not noisy)? * Do the results bear any connection to the benchmarking of feature attributions, which is frequently done through removals (e.g., Tomsett et al., 2020; Rong et al., 2022)? A discussion of these points in the paper could help to strengthen this work and I would be willing to reconsider my overall assessment based on the authors' replies. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 2 fair Limitations: I think the technical limitations are sufficiently clear, and mostly concern the Lipschitz-continuity assumptions and the bounded domains. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
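The input-perturbation robustness discussed in this review can be illustrated concretely: for a removal-based attribution built from an $L$-Lipschitz model, each coalition value moves by at most $L\|\delta\|$ under an input shift $\delta$, so exact Shapley values (a convex combination of coalition-value differences) move by at most $2L\|\delta\|$ per feature. A toy sketch with baseline-style removal; the model, dimensions, and the factor $2$ in the bound are illustrative assumptions, not the paper's exact constants or experimental setup:

```python
import numpy as np
from itertools import combinations
from math import factorial

def shapley_baseline(f, x, baseline):
    """Exact Shapley values with 'removal = replace by baseline' (occlusion-style)."""
    n = len(x)
    phi = np.zeros(n)
    def v(S):  # coalition value: keep features in S, use baseline elsewhere
        z = baseline.copy()
        z[list(S)] = x[list(S)]
        return f(z)
    for i in range(n):
        rest = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(rest, k):
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[i] += w * (v(S + (i,)) - v(S))
    return phi

rng = np.random.default_rng(0)
w = rng.standard_normal(5)
f = lambda z: np.tanh(w @ z)           # Lipschitz with constant L <= ||w||_2
L = np.linalg.norm(w)

x = rng.standard_normal(5)
delta = 1e-2 * rng.standard_normal(5)  # small input perturbation
base = np.zeros(5)

phi = shapley_baseline(f, x, base)
phi2 = shapley_baseline(f, x + delta, base)
shift = np.max(np.abs(phi - phi2))
# Each coalition value moves by at most L*||delta||, and Shapley weights sum
# to one, so the per-feature attribution shift is at most 2*L*||delta||.
print(shift, 2 * L * np.linalg.norm(delta))
```

The observed shift sits well inside the crude $2L\|\delta\|$ envelope, which is the flavor of guarantee (an upper bound, possibly loose) that the paper's theorems provide.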
Rebuttal 1: Rebuttal: > “Given the formalization done in prior work, the main results are not unexpected.” Regarding the “removal-based explanation” formalization from Covert et al., 2021, we do not intend to frame this as a contribution, hence its exposition in the Background (Section 2.2). Regarding the assumptions required for our analysis and whether the results are surprising, our goal is to understand these methods, so it is not a weakness in our analysis to have identified these necessary conditions. Viewed differently, these assumptions perhaps reflect how difficult it is to achieve robust attributions. Several insights stand as worthy and perhaps “surprising” technical contributions: identifying the role of Lipschitz-like continuity in the conditional distribution and its role in making the conditional removal approach less robust; bridging two notions of implementation invariance (weak and strong) through the lens of robustness to model perturbation; and deriving the most robust summary techniques within classes of game-theoretic options. > “The finding that regularized networks have more robust (or less noisy) attributions, has already been established (e.g., by Shah et al., 2021, Dombrowski et al.).” The novelty lies in the different reasons why network regularization works, which are implied by our theoretical analysis, and which should be a topic of sufficient interest for readers. The previous works demonstrating this effect specifically study gradient-based methods [R1, R10], where weight decay plays a role in bounding a network’s Hessian. Here, we use weight decay because of its role in bounding the network’s Lipschitz constant. Furthermore, the role of Lipschitz continuity in our analysis implies that we can use other techniques to improve the robustness: beyond weight decay regularization, we can also use recent techniques for Lipschitz-constrained networks [R11-R13]. 
To help readers appreciate these points, we will be sure to clarify them in our revised Discussion. > “The Lipschitz continuity does not imply that the method needs to pass the parameter randomization test.” The parameter randomization test is a form of model perturbation, and Lipschitz continuity has no role in our analysis of robustness to model perturbations. > “How can the results be operationalized in practice? Can something constructive be derived from them?” From a practical perspective, our theory suggests (i) how to choose an attribution with strong robustness, and (ii) how to train a model that achieves more robust explanations. Regarding the latter point and specifically robustness to input perturbations, by largely reducing this problem to the model's Lipschitz constant, our work shows how to leverage a range of solutions developed in the robust learning literature, such as regularizing or constraining the network’s Lipschitz constant. There has been an ongoing debate in the literature about whether features should be removed using baseline values/marginal distribution or conditional distribution, as discussed and summarized in Chen et al. [R14]. Our analysis provides constructive guidelines for this choice from a robustness perspective: that is, if input perturbation is a concern, then baseline or marginal removal provides superior robustness. However, if model perturbation is a concern (e.g., “fairwashing” a model’s dependency on features that encode social biases), then conditional removal may be a better choice. > “How would authors quantify the degree of robustness that is desirable (e.g., not invariant, not noisy)?” This is an interesting question that we think is somewhat open-ended in terms of how to address it. 
The perspective we explored is how to choose an attribution that’s as robust as possible while maintaining a specific sense of being meaningful, and our results in Section 3.3 provide several options for how "meaningful" can be defined (see Definition 2). Briefly, we found that different combinations of constraints on the attribution lead to different optimal choices, for example the Banzhaf or Shapley value, which are both known in the literature. However, we acknowledge that there could be other definitions for “meaningful” explanations, and exploring alternative perspectives on this question is an interesting topic for future work. > “Do the results bear any connections to the benchmarking of feature attributions which is frequently done through removals (e.g. Tomsett et al. 2020; Rong et al., 2022)?” Thank you for raising this question, we are happy to cite several papers from that related line of work. After carefully considering the subject, we do not think there’s an interesting connection between these research directions; instead, we think they are best viewed as distinct, yet important notions of good behavior from an attribution method. And in fact, they may represent competing notions of good behavior, because as we allude to in Section 3.3, perfect robustness can only be achieved by designing an uninformative attribution method (which would necessarily perform badly on any removal/ablation-based metric). Thus, future work on new attribution methods should be sure to account for their performance from the perspective of both robustness and ablation-based metrics. > “Table 2 seems not to be referenced in the text and thus appears a bit context-less.” Thank you for reading our paper carefully. We will include a short comment after Lemmas 5 & 6 to reference Table 2. --- Rebuttal Comment 1.1: Title: Response to Rebuttal Comment: Thank you for the clarifications. 
In my remark *"The Lipschitz continuity does not imply that the method needs to pass the parameter randomization test"*, I was referring to lines 323-325 in the paper, which seem to suggest that the result in the previous Theorem 2 (robustness of removal-based explanations to model perturbations) already implies that we pass the parameter randomization test. However, as it is an upper bound, we cannot rule out constant attributions even in the case of model perturbation. While this is an interesting finding, I don't think it is a strict consequence of the theoretical analysis. I think if the write-up is updated to incorporate the above major and minor points, in particular by * including the discussion on applications of the results mentioned here * including at least some experiments on the additional page * clarifying that the formulation of the problem was transferred from Covert et al. (2021) this work will be accessible to the broader audience and a nice contribution, which warrants acceptance at the conference in my opinion. --- Reply to Comment 1.1.1: Comment: Thank you for getting back to us and for considering our work a nice contribution! Regarding the sanity check results, we recognize that the connection between our theory and experiments is not as clear as in the input perturbation case. The connection is that if you read the plots from right to left, we see that decreasing the model perturbation decreases the difference in attribution scores, which is implied by our theory via the bound in Theorem 2. However, your point is well taken, because our analysis does not imply that the difference must be large on the far right side; for this, we would require a lower bound rather than an upper bound. For example, for constant-valued attributions we would have $h(\text{summary}) = 0$, so the upper bound in Theorem 2 is always zero regardless of $||f - f’||$. 
We will substitute the word “naturally” with “empirically” in line 325 to avoid implying such a connection. In any case, given the significance of these sanity checks and the fact that they are the main benchmark for model perturbations, we thought that our findings were interesting and worth including. We will be sure to incorporate all the highlighted points in our revised Discussion and Experiments sections, as outlined in our responses.
Summary: This paper theoretically analyzes the robustness of removal-based feature attributions against input perturbation and model perturbation, with different summary techniques. Empirical experiments on synthetic datasets support the theoretical analyses, such as the finding that conditional sampling is more robust to model perturbations than baseline or marginal sampling. Experiments on real-world datasets (UCI wine quality and MNIST) give more insights regarding the robustness under different model training settings and comparison to gradient-based explanation methods. Strengths: (1) The proposed theoretical analysis of removal-based feature attributions is technically sound. The input perturbation and model perturbation, together with the summary technique, cover the different aspects of removal-based attributions. (2) As more explanation techniques are proposed, this work gives a good starting point for analyzing robustness limitations from a theoretical perspective, which can enable solid robustness evaluation across different explanation methods. (3) Besides providing theory proofs, empirical experiments also validate and support the findings from theoretical analyses. (4) Messages from the proposed theories are clear. The paper is well written and presented. Weaknesses: (1) The analyses are only limited to several removal-based feature attribution methods. Extending the analyses to other explanation techniques such as gradient-based explanations would make the contribution stronger. In fact, gradient-based explanations are more popularly used. (2) The project is inspiring and gives insights into different removal-based attributions. However, the impact of the analyses could be broader if the authors proposed a technically sound robustness evaluation benchmark based on the theory. (3) Analyzing the robustness is computationally costly if the input data is high-dimensional or the model is huge. 
The authors should consider extending the proposed analyses to complex datasets, and some experimental results would make the paper stronger. Technical Quality: 3 good Clarity: 3 good Questions for Authors: The overall robustness proposed at the end of Section 4, which combines Theorems 1 and 2, seems to be synthetic and is hard to adapt to a realistic scenario. For instance, the input perturbation robustness measures the consistency of explanations generated by one model, while model perturbation robustness analyzes the consistency given different (similar) target models. If the overall robustness is used, it does not provide a clear message about different removal-based attributions. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Broader impact of the proposed framework should be discussed. More examples of using the proposed robustness would be necessary to make the contribution stronger. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > “Extending the analyses to other explanation techniques such as gradient-based explanations would make the contribution stronger. In fact, gradient-based explanations are more popularly used.” Other works have focused on the robustness of gradient-based methods [R1, R6], but the feature attribution literature lacks comprehensive theoretical analyses on the robustness of removal-based methods, which are also highly popular (e.g., the SHAP GitHub repository has ~19.9k stars, and the LIME GitHub repository has ~10.8k stars). Our paper makes a distinct contribution by addressing this gap. In the Discussion, we draw connections between the robustness of removal- and gradient-based methods. Primarily, the robustness of removal-based methods under input perturbations is related to a model’s Lipschitz continuity, whereas the robustness of gradient-based methods is often related to the Lipschitzness of the model's gradient (i.e., Lipschitz smoothness). It would be interesting to pursue a unified, theoretical analysis of the robustness of both classes of methods, but we leave this topic to future work. > “The impact of the analyses can be broader if the authors can propose a technically sound robustness evaluation benchmark based on the theory.” The goal of a robustness benchmark is to provide an objective comparison between attribution methods, perhaps in the context of a specific model and dataset, but ideally with more generalizable conclusions. Several existing works address this topic empirically, and our work intends to provide a complementary theoretical perspective. We represent the essential properties of the model and dataset through our assumptions (namely $L$, $B$ and $M$), and this allows us to compare removal-based explanations in a more generalized sense by focusing on their implementation choices. That said, there is also a clear connection between our analysis and possible empirical benchmarks, and this can be seen in our experiments. 
For robustness to input perturbations, we verify that our theoretical findings are demonstrated in practice (Theorem 1) by testing the stability of attributions when adding noise to the input (e.g., Figure 4). These experiments resemble the empirical analysis conducted by Ghorbani et al. for gradient-based methods [R7], and they are also similar to metrics used in an existing benchmark library [R8]. In other words, our theory provides a way to characterize properties that other works study empirically. Regarding model perturbation, two types of benchmarks could include (i) making small perturbations to verify that the attributions don’t change much, and (ii) making large perturbations to verify that the attributions change significantly. In terms of existing works, Anders et al. explore the first direction [R6], and Adebayo et al. explore the latter [R5]. We incorporated the sanity checks from Adebayo et al. into our experiments (Figures 5-6), and Figure 2 includes an experiment with logistic regression similar to (i). There is certainly room to create new benchmarks along these lines, but a difficult design choice is how to define the small perturbations. Giving this choice proper consideration and generating results across a wide class of methods is best left as a topic for future work. Again, it is perhaps best to view our work as studying theoretically what other works have studied empirically, including not only [R5] and [R6] but also Slack et al. [R9]. > “To analyze the robustness is computationally costly if the input data is high-dimensional or the model is huge. The author should consider extending the proposed analyses on complex datasets and some experimental results would make the paper stronger.” We thank the reviewer for this suggestion to improve the impact of our paper. We ran additional experiments with (i) ResNet-18 trained on CIFAR-10, and (ii) ResNet-50 trained on 10 classes from ImageNet (i.e., Imagenette). 
The figures are included in the attached file in the “global” response. We again observe empirical results consistent with our theoretical insights. In particular, ResNet-18 and ResNet-50 trained with stronger weight decay have more robust attributions under input perturbations. Also, under cascading model randomization (a form of increasing model perturbation) [R5], the unperturbed and perturbed ResNets indeed have increasingly dissimilar attributions (and as in our previous experiments, more so for removal-based methods than gradient-based ones). > “The overall robustness proposed at the end of Section 4, which combines theorems 1 and 2, seems to be synthesized and is hard to adapt into a realistic scenario.” The overall robustness in Corollary 1 is included for the sake of completeness and to show that our theory can account for this case. The result can be useful in situations where we want to interpret a model $f$ on a sample $x$, but the system is subject to simultaneous perturbations, such that we only have access to a perturbed model $f’$ and perturbed sample $x’$ (e.g., hiding a black-box model’s undesirable dependency on certain features with simultaneous, small model and input perturbations). However, we agree that such simultaneous perturbations may be uncommon in practice, so Corollary 1 can be deferred to the Appendix (thus making more room for our experiments). --- Rebuttal Comment 1.1: Comment: Thank you for your response and clarification of my concerns. I think one future direction proposed by the authors, "It would be interesting to pursue a unified, theoretical analysis of the robustness of both classes of methods," would make a valuable contribution to the community. I raised my score to reflect my appreciation for the authors' efforts.
Summary: This paper studies the robustness properties of an explanation to small perturbations in the input space (i.e. like an adversarial example) and also to the model parameters. The authors use a number of Lipschitz-style bounds to then derive overall limits on how much explanations can change. Update: As the authors have addressed the main concern on the Lipschitz constant, I have no major reason to reject this paper. Strengths: The bounds are principled and rigorous for explanations, as opposed to heuristic. The paper considers multiple removal techniques and summary statistics to show how the bounds change based on the methods. These bounds can sometimes lead to significant asymptotic differences. Weaknesses: The elephant in the room is that the results all need some kind of Lipschitz or Lipschitz-like bound, and that the final bound depends on the constant. This constant could be significant and cannot necessarily be ignored, but there does not seem to be any evidence that the actual Lipschitz constant is at all close to being small enough to be useful. There are barely any experimental results in the main paper; almost all of them are deferred to the appendix. The results seem to focus mainly on synthetic settings and linear settings where Lipschitz constants and other constants can be directly computed. They also focus on fairly simple datasets such as UCI wine and MNIST. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Can you measure or estimate the Lipschitz constants and show that these are meaningfully non-vacuous assumptions? It would be better to highlight some kind of experimental result in the main paper. Is there a reason or limiting factor preventing the work from applying to higher dimensional settings such as text or images? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The limitations discuss some conservativeness of the bounds, which could be addressed with work on certified robustness. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > “The elephant in the room is that the results all need some kind of Lipschitz or Lipschitz-like bound […] there does not seem to be any evidence that the actual Lipschitz constant is at all close to being small enough to be useful.” First, it’s worth emphasizing that our work aims to understand these attribution methods, not necessarily advocate for them, so it’s not a weakness that our analysis highlights the dependence on a quantity that is difficult to measure. In settings where $L$ can be large, our theory explains their lack of robustness; however, more positively, it also highlights the opportunity to ensure robustness via methods for controlling $L$, such as Lipschitz-constrained networks. Ultimately, this doesn't seem like a reason to reject our work, as we have correctly characterized this family of attribution methods. Note also that earlier works on gradient-based methods like Dombrowski et al. [R1] are based on properties of the network’s Hessian, which are more difficult to measure than the Lipschitz constant. Nonetheless, we can also provide a brief description of the current literature on estimating $L$. Exact computation of the Lipschitz constant, even for a two-layer neural network, is NP-hard [R2]. Even for simple networks on MNIST, current methods for estimating their Lipschitz constants give large estimates [R2-R4], which can be conservative and may obscure the meaningful conclusions that follow from our theoretical results. That said, our theoretical analysis suggests empirical techniques for improving the robustness (e.g., increasing weight decay to shrink an upper bound on the Lipschitz constant), and these show the intended effect in our experiments. Therefore, our theoretical results regarding Lipschitz continuity are indeed useful in practice. > “It would be better to highlight some kind of experimental result in the main paper.” We thank the reviewer for this suggestion to improve our paper. 
To address this suggestion, we propose to move Corollary 1 to the Appendix to make room for a longer discussion of the experiments. Also, with the extra page allowed in the camera-ready version, we can move Figures 4 & 5 to the main text to highlight the empirical implications of our theory. We believe that a focus on theoretical insights and a concise description of experiments in the main text will allow the average reader to best understand the contributions and findings from our paper. > “Is there a reason or limiting factor preventing the work from applying to higher dimensional work such as text or image settings?” We thank the reviewer for this suggestion to improve the impact of our paper. To address this point, we ran additional experiments with (i) ResNet-18 trained on CIFAR-10, and (ii) ResNet-50 trained on 10 classes from ImageNet (i.e., Imagenette). The figures are included in the attached file in the “global” response. We again observe empirical results consistent with our theoretical insights. In particular, ResNet-18 and ResNet-50 trained with stronger weight decay have more robust attributions under input perturbations. Also, under cascading model randomization (a form of increasing model perturbation) [R5], the unperturbed and perturbed ResNets indeed have increasingly dissimilar attributions (and as in our previous experiments, more so for removal-based methods than gradient-based ones). --- Rebuttal Comment 1.1: Comment: Thanks for your reply! 1. I think you may have missed the point about the Lipschitz constant---I understand that it plays a role in the theoretical results, and that it is hard to estimate. I am just looking for evidence that this assumption does not make the results vacuous, especially since networks can have notoriously large Lipschitz constants. It is not necessary to describe the current literature on estimating Lipschitz constants. 
For example, what you suggested is a great example of what would fit the bill here---if a Lipschitz-constrained network does in fact improve the robustness, then this implies that the constant can in fact be small enough to have non-vacuous implications in practice. 2. You also mention using weight decay to shrink an upper bound on the Lipschitz constant---can you expand more on this? I did not see much about this in the main paper. As weight decay has been used before to improve robustness of the model, it makes sense that this could also help robustness of the attribution, but is there a deeper theoretical link here? --- Reply to Comment 1.1.1: Comment: Thank you for getting back to us! We answer your second question first, because it can help clarify the first question. - There is indeed a theoretical link between weight decay and a network’s Lipschitz constant. Weight decay shrinks an affine layer’s Frobenius norm, which upper bounds the layer’s spectral norm; and the product of all layers’ spectral norms upper bounds the network’s Lipschitz constant [R11]. We discussed this in the Experiments section in our Appendix, and in our revision we will be sure to highlight it in the main text. - Like you said, if our theoretical results were vacuous in practice, then reducing the Lipschitz constant of a network (or in practice some proxy to the Lipschitz constant) would not improve attribution robustness. However, we see that regularizing the Frobenius norm of a network, which upper bounds the network Lipschitz constant as discussed above, can indeed empirically improve attribution robustness (see Figure 4 in the Appendix and Figures R1-R2 in our rebuttal pdf). Hence, our theoretical results do bear practical implications. - On the other hand, we do recognize that some networks can potentially have very large Lipschitz constants (assuming we can somehow accurately approximate those Lipschitz constants). 
In such a situation, our theoretical results suggest that those networks should be avoided because the guarantees for their attribution robustness are not useful. In fact, those networks should also be avoided because the worst-case guarantees for their robustness against general adversarial attacks are not useful either. This is why there are methods designed for improving the Lipschitz regularity of neural networks [R11-R13].
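The chain of inequalities invoked in the rebuttal thread above (weight decay shrinks each layer's Frobenius norm, which upper-bounds its spectral norm, and the product of spectral norms upper-bounds the network's Lipschitz constant [R11]) can be checked numerically. The following is an illustrative sketch on a small random ReLU network, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)

# A small random ReLU network f(x) = W3 relu(W2 relu(W1 x)); ReLU is 1-Lipschitz,
# so the product of per-layer spectral norms upper-bounds the Lipschitz constant of f.
Ws = [rng.normal(size=(16, 8)), rng.normal(size=(16, 16)), rng.normal(size=(1, 16))]

def f(x):
    h = x
    for W in Ws[:-1]:
        h = np.maximum(W @ h, 0.0)
    return Ws[-1] @ h

spectral = [np.linalg.norm(W, ord=2) for W in Ws]   # largest singular value per layer
frob = [np.linalg.norm(W, ord="fro") for W in Ws]   # Frobenius >= spectral, layer-wise

lip_bound = np.prod(spectral)   # upper bound on the network's Lipschitz constant
frob_bound = np.prod(frob)      # looser bound -- the quantity that weight decay shrinks

# Any empirical difference quotient stays below both bounds.
x, y = rng.normal(size=8), rng.normal(size=8)
slope = float(np.abs(f(x) - f(y))[0]) / np.linalg.norm(x - y)
assert slope <= lip_bound <= frob_bound
```

Regularizing the Frobenius norms (as weight decay does) therefore shrinks a quantity that provably sits above the network's true Lipschitz constant, which is why it can improve attribution robustness even when the constant itself cannot be computed.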
Summary: Previous research has primarily focused on the robustness of gradient-based feature attributions, but the robustness properties of removal-based attribution methods are not well understood. To fill this gap, the authors of the paper aim to theoretically analyze and characterize the robustness of removal-based feature attributions. They provide a unified analysis of such methods and prove upper bounds for the difference between intact and perturbed attributions under various settings of input and model perturbation. The authors validate their theoretical findings through empirical experiments on synthetic and real-world data, showcasing their practical implications. Strengths: 1. This paper addresses a crucial XAI problem: interpretation robustness concerning both input and model perturbations. The study may aid the derivation of the theoretical impact of some important tricks in XAI, such as baseline value selection and marginal distribution approximation. 2. The derivation of robustness considers input perturbation and model perturbation comprehensively. 3. Section 4 provides valuable insights, revealing that interpretation robustness relies on both the Lipschitz continuity constant and feature number for input perturbation, while it depends on the ∞-norm and feature number in the case of model perturbation. 4. The experiment observations in Section 5 are also enlightening, highlighting the advantages of removal-based attribution over gradient-based attribution. Weaknesses: The paper should be re-arranged by including experiment results in the main content and moving some theoretical results to the appendix. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please refer to the Weakness. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have adequately addressed the limitations of this work in the last section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > “The paper should be re-arranged by including experiment results in the main content and moving some theoretical results to the appendix.” We thank the reviewer for this suggestion to improve our paper. To address this suggestion, we will move Corollary 1 to the Appendix to make room for a longer discussion of the experiments. Also, with the extra page allowed in the camera-ready version, we can move Figures 4 & 5 to the main text to highlight the empirical implications of our theory. While we agree that it would be ideal to include our full experiments section in the main text, we believe that a focus on theoretical insights along with a concise description of experiments will allow the average reader to best understand the findings from our work.
Rebuttal 1: Rebuttal: We thank all the reviewers for their insightful and constructive comments, which have helped us further improve our paper. The specific issues raised by each reviewer are addressed in the individual responses below. We hope you will consider raising the scores if we have adequately addressed your comments. References relevant to all responses are included below. References [R1] Towards robust explanations for deep neural networks - Dombrowski et al. 2022 [R2] Lipschitz regularity of deep neural networks: analysis and efficient estimation - Scaman and Virmaux 2018 [R3] Efficient and Accurate Estimation of Lipschitz Constants for Deep Neural Networks - Fazlyab et al. 2019 [R4] Efficiently Computing Local Lipschitz Constants of Neural Networks via Bound Propagation - Shi et al. 2022 [R5] Sanity Checks for Saliency Maps - Adebayo et al. 2018 [R6] Fairwashing Explanations with Off-Manifold Detergent - Anders et al. 2020 [R7] Interpretation of Neural Networks is Fragile - Ghorbani and Abid et al. 2019 [R8] OpenXAI: Towards a Transparent Evaluation of Post hoc Model Explanations - Agarwal et al. 2022 [R9] Fooling LIME and SHAP: Adversarial Attacks on Post hoc Explanation Methods - Slack and Hilgard et al. 2020 [R10] Do Input Gradients Highlight Discriminative Features? - Shah et al. 2021 [R11] Regularisation of Neural Networks by Enforcing Lipschitz Continuity - Gouk et al. 2018 [R12] The Singular Values of Convolutional Layers - Sedghi et al. 2018 [R13] Lipschitz-Margin Training: Scalable Certification of Perturbation Invariance for Deep Neural Networks - Tsuzuku et al. 2018 [R14] True to the Model or True to the Data? - Chen and Janizek et al. 2020 Pdf: /pdf/c06c4e652fd1dde6586764e83729a4bfc1df2af1.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
A Heavy-Tailed Algebra for Probabilistic Programming
Accept (poster)
Summary: The paper proposes a static analysis technique for probabilistic programming languages, which annotates random variables with metadata characterizing their tail behavior. In particular, generalized Gamma distributions are used for this purpose. It is shown that they are closed under a number of operations, including addition, multiplication, and, under some conditions, reciprocals. This gives rise to an algebra which is used to statically infer the tail behavior of random variables throughout the probabilistic program, including posterior distributions. At runtime, the random variables are then estimated by neural spline flows initialized to the inferred Gamma distributions, which guarantees the correct tail behavior while allowing flexibility for the bulk of the probability mass. It is shown that this approach yields the correct behavior in many instances in which conventional methods fail. Strengths: - The paper addresses an important and interesting problem in probabilistic programming. - The static analysis approach to analyzing tail behavior is novel to my knowledge. - The chosen approach is competently executed: The provided algebra covers a wide variety of distribution types and operations, many of which utilize newly derived theoretical results. - Combining the inferred Gamma distribution parameters with neural spline flows is a clever way to maintain the correct tail behavior while allowing flexible estimation of the distribution's bulk. - Encouraging experimental results are provided for a number of tasks, including density estimation, variational inference, and Bayesian linear regression. - The paper is well written and easy to follow. Detailed derivations for the new theoretical results are provided in the appendix. Weaknesses: - The models used for the experimental results are somewhat small. 
While simple monovariate distributions are fine to get a sense for the behavior of the method, it would be good to include some larger programs to demonstrate the robustness of the method. - The paper mentions a number of limitations of the approach: It does not cover log-normal tails, operations between dependent variables, or dynamic model structures. Given these limitations, it would be especially useful to highlight some set of more complex models used in practice to which the method is nevertheless applicable. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - How does the system behave when the assumptions discussed above are not fully met, e.g., when there is an operation between dependent variables? Is it possible to make use of partial results for a given program? - The discussion on posterior distributions mostly focuses on two random variables connected by a chain of operations, but notes that other cases may also be covered. Is it really the case that any posterior distribution may be estimated this straightforwardly? Can the system as implemented cover complicated multivariate computation graphs? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: The limitations have been well addressed, though their practical implications could be stated more directly (as discussed above). Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the positive assessment of our work! > How does the system behave when the assumptions discussed above are not fully met, e.g., when there is an operation between dependent variables? Is it possible to make use of partial results for a given program? > The discussion on posterior distributions mostly focuses on two random variables connected by a chain of operations, but notes that other cases may also be covered. Is it really the case that any posterior distribution may be estimated this straightforwardly? Can the system as implemented cover complicated multivariate computation graphs? This relates to questions raised by Reviewer 9rGq --- in a nutshell, handling dependencies accurately will require symbolic treatment. You are right that not all posteriors are covered, and it is actually NP-hard (line 274) to cover all possible posteriors. While the GGA is capable of analyzing some complicated multivariate computation graphs, it is currently limited to computation graphs satisfying (or as the discussion around __independence assumptions__ shows, can be equivalently rewritten to satisfy) the independence assumption for the input operands of every GGA operation. Generally speaking, to exactly determine tail behavior at scale is likely to require further advancements on top of what we propose. However, we do agree that there is still partial utility when the assumptions are violated and will include this discussion in the revision. In particular, since the GGA is quite simple, there is nothing preventing its usage at scale to provide an estimate of the tail behavior. To our knowledge, there is no alternative in the literature, and the GGA may reveal information about the target density (e.g. to diagnose issues) that would otherwise be opaque. > ...the method's evaluation is limited to unimodal distributions. The density estimation experiments have been done on standard distributions. 
I am unsure about whether this is a limitation of the evaluation or a limitation of the proposed method. We believe evaluation against a multi-modal mixture target is reasonable provided the mixture is given explicitly (e.g. as a list of component random variables and their mixture probabilities) so that a PPL compiler can analyze it statically. Due to space constraints, we did not include an operation for a mixture of densities in the GGA although such a result is possible within the GGA. Indeed, denoting a mixture of densities by $\cup$, this operation would be added to Table 1 in the form $$ (\nu_1,\sigma_1,\rho_1) \cup (\nu_2,\sigma_2,\rho_2) \equiv \max\{(\nu_1,\sigma_1,\rho_1),(\nu_2,\sigma_2,\rho_2)\}. $$ At this point, the framework is capable of naturally handling multi-modality through its normalizing flow bulk adjustment. For example, (Wehenkel \& Louppe, 2019) or (Durkan et al., 2019) both provide multi-modal density approximations and could be used as the normalizing flow. Since multimodality is a bulk property (e.g. a Gaussian mixture with finitely many components continues to have Gaussian tails), it does not affect tail asymptotics and the GGA remains applicable. With this in mind, multimodality is not really an assessment of the GGA, and so we focussed our attention on unimodal distributions with clearer comparisons. We are happy to include a multimodal example in the revised version, however. --- Rebuttal Comment 1.1: Title: Response to Rebuttal Comment: Thank you for your detailed response. I appreciate the clarifications, and am happy to keep my score.
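The mixture rule stated in the rebuttal above — the tail class of a finite mixture is the "max", i.e. the heavier, of its components' classes — can be sketched in a few lines. Note the tuple ordering used here (smaller $\rho$ is heavier, with ties broken by smaller $\sigma$, then larger $\nu$) is an assumption for illustration, not necessarily the paper's exact definition:

```python
# A tail class (nu, sigma, rho) stands for a density tail ~ x^nu * exp(-sigma * x^rho).
# Assumed "heavier" ordering: smaller rho decays slower; ties broken by smaller sigma,
# then larger nu.
def heavier(a, b):
    (nu1, s1, r1), (nu2, s2, r2) = a, b
    if r1 != r2:
        return a if r1 < r2 else b
    if s1 != s2:
        return a if s1 < s2 else b
    return a if nu1 >= nu2 else b

def mixture(a, b):
    # Tail of a finite mixture is its heavier component's tail: e.g. a Gaussian
    # mixed with a power law is still power-law tailed.
    return heavier(a, b)

gaussian = (0.0, 0.5, 2.0)   # density tail ~ exp(-x^2 / 2)
pareto = (-3.0, 0.0, 0.0)    # power-law tail ~ x^-3
assert mixture(gaussian, pareto) == pareto
```

This also reflects the point made above that multimodality is a bulk property: mixing densities changes the bulk but only the heaviest component survives in the tail asymptotics.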
Summary: The paper develops an algebra which acts on a three-parameter family of tail asymptotics based on the generalized Gamma distribution. The algebra is closed under addition, multiplication, and powers, and a full list of operations is given in Table 1. With this algebra, tail calculation can be done automatically in probabilistic programming instead of analytically. The paper also proposes a method of splicing bulk and calibrated tails with experimental verification. Strengths: 1. The paper develops a heavy-tailed algebra for probabilistic programming and also proposes a method of splicing bulk and calibrated tails with experimental verification. 2. Example 4 in the appendix has a calculation for the product of two random variables. Weaknesses: I did not notice any glaring weaknesses in the paper but I am also not a subject matter expert on probabilistic programming. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. In the case of multi-dimensional random variables and dependent tails, would you consider using an extreme-value copula? 2. In the case of dependent tails and bulk to tail splicing/ calibration/ extrapolation, would you consider using an Archimax copula? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The limitations were clearly defined and addressed in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for reading and reviewing our paper. Since our work only focuses on univariate tails, we agree that copulas provide a promising direction which can be combined with other work applying normalizing flows to multi-variate heavy tails (ATAF paper) in order to improve multivariate heavy-tailed approximation. The choice of copula is indeed important here, so the extension will require further work. --- Rebuttal Comment 1.1: Comment: Thanks for the response. I have read all the other reviews and responses and increased the score to 7.
Summary: During inference, we are often interested in the behavior of the tails of the distributions we are analyzing. Heavy or light tails may necessitate switching algorithms so that inference remains stable, for example. This paper describes a calculus by which a probabilistic programming language may calculate the tails of distribution objects derived from operations performed on elemental distributions. They apply their method to perform some simple probabilistic programming applications. Strengths: The authors apply their method to a huge number of distributions and their transformations. They summarize their rules helpfully in charts. It seems as though their method may also simply be added to other probabilistic programming languages. I would certainly appreciate being able to call the tail parameters of a distribution object in Pyro, for example. Weaknesses: The authors in section 3 suggest a method for fitting a density that consists of fitting its tail using their calculus and then fitting the bulk using a flow model. This is a reasonable choice for their simple experiments and there are a number of other choices they could have made to fit the bulk. The exposition of the paper sets it up in figure 1 and section 3 as essential to their probabilistic programming method, but I don't believe that to be the case. It would clarify and strengthen the paper if the authors clarified the relation of their flow method for estimating densities to the rest of their contributions. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: In line 134 you describe the representative distribution of a distribution with a tail with a small $\rho$. 
You motivate your construction by matching moments, but could you describe more rigorously why this is a reasonable choice? I also wonder why the authors don't consider incorporating discrete distributions such as Poisson and Geometric distributions, which can be treated as continuous in their setting, say by adding a uniform random variable, so a geometric would have a $(0, 0, \log(p))$ tail for example. There are often cases where discrete data is treated as continuous, such as for RNAseq counts. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: The authors describe limitations in section 6. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the positive assessment of our work! Yes, you are correct that the choice of spliced flow is non-essential. Our intention was to propose one such construction which was sufficiently flexible to capture the bulk while also respecting tail asymptotics computed by the GGA and we achieved this by proving Theorem 2 and operationalizing it in Section 3. Alternative constructions which preserve tail asymptotics may also perform well and we will clarify our contributions section to reflect this in the revision. > In line 134 you describe the representative distribution of a distribution with a tail with a small $\rho$. You motivate your construction by matching moments, but could you describe more rigorously why this is a reasonable choice? It is known that very heavy subexponential distributions (small $\rho$)---which often come from repeated multiplications---closely resemble a power law in practice, as shown in line 522. If we do not approximate this case by a power law, the representative generalized Gamma distribution often provides a very poor approximation to the bulk, inhibiting the final stage of bulk correction. The moment-matching approach is loosely derived from ideas in implicit renewal theory (Buraczewski, Damek, and Mikosch 2016, which we are happy to cite in the revised version) and we found it to perform the best in practice. __Incorporating discretes__: Thank you for the suggestion! Yes, as you have already noted with the geometric distribution (although the class would be $(0,\log p, 1)$), there is nothing to prohibit the use of the GGA for operating on discrete distributions by treating them as continuous through a suitable convolution as suggested. --- Rebuttal Comment 1.1: Title: Response to authors Comment: I thank the authors for their response, they have answered my questions.
Summary: The paper addresses the problem of density estimation of probabilistic models (expressed as probabilistic programs), with a focus on their tails. This is important for several Bayesian inference methods: importance sampling can exhibit infinite variance if the proposal has a lighter tail than the target, many black box variational inference methods cannot change their tail behavior and MCMC algorithms may lose ergodicity if the tail of the target is outside a particular class. This paper proposes a static analysis pass on probabilistic programs consisting of sampling from primitive distributions and arithmetic operations on the sampled variables (addition, multiplication, division, logarithm, exponential, and Lipschitz functions). The analysis is based on generalized Gamma distributions, which capture many tail behaviors of primitive distributions (e.g. Gaussian, Gamma, Weibull, Student-t, Pareto, but not the log-normal) and are closed under the above operations on random variables (or can be bounded if not represented exactly). As a consequence, a compiler pass can compute the parameters of a generalized Gamma distribution with the same (or at least providing a bound on the) tail behavior as the posterior distribution of the probabilistic program. Since the bulk of this distribution may be very different from the posterior, the paper combines it with neural spline flows: effectively, this computes the pushforward distribution under a Lipschitz function, resulting in a better bulk approximation while preserving the tails. The experimental evaluation demonstrates that this approach to density estimation usually improves the tail estimation compared to status-quo initializations with a normal or Cauchy distribution, both for density estimation using normalizing flows and variational inference, where the target distributions are simple Cauchy, Inverse Gamma, Student-t, χ^2 and normal distributions. 
Similarly, the density estimation of the variance parameter of a Bayesian linear regression model is improved compared to the standard Gaussian approach. Finally, they observe that their method (without normalizing flows) provides a very close fit for the density estimation of the iterates of stochastic gradient descent applied to a least-squares linear regression. Strengths: The paper's main idea of using a static analysis to compute the tails of the posterior distribution of a probabilistic program is very interesting because tails are difficult to estimate. Such an ahead-of-time analysis with correctness guarantees is therefore very useful. In order to arrive at this result, the paper proves several new theoretical results on generalized Gamma distributions, which may be of independent interest. The experimental results show that in combination with normalizing flows, this often performs better than existing methods that assume a Gaussian base distribution. The presentation of the paper is generally clear and readable. Overall, this paper has an innovative idea with promising results, which are of interest to the probabilistic programming community. Weaknesses: While the paper's idea is very interesting, I have a few concerns about the soundness and presentation. I will update my rating if the author's answers address my concerns. Guarantees and assumptions: What exactly are the guarantees (mentioned in line 149) and assumptions of the static analysis? It would be good if this could be stated in one place, ideally as a self-contained theorem. Table 1 mentions that "additional assumptions are required" for several operations, but I was not able to find them in the paper. The assumption that the operands are independent is also very important and only really mentioned at the very end (line 270). 
It is also unclear what exactly the guarantees are for programs involving reciprocals, exponentials, and logarithms because the generalized Gamma distributions are not closed under these operations. In this case, will the computed tail be an upper or a lower bound? What happens if such operations are applied repeatedly? While there are theorems and proofs for each individual operation, I was not able to find a theorem clarifying the assumptions and guarantees about the composition of such operations, especially if the resulting tail cannot be represented exactly in the GGA. Bulk correction: Section 3.3 (the bulk correction via normalizing flows) is informal and light on details. I assume the neural spline flows preserve the tails because they are 1-Lipschitz maps? This is not obvious to me, and I think it would be good if the paper could elaborate on why the normalizing flows don't affect the tails. Again, putting the statement in the form of a theorem with all necessary assumptions would be helpful. Probabilistic programming language: The grammar of the probabilistic programming language is only introduced by example; there is no comprehensive description, not even in the appendix. Furthermore, the name "programming language" sounds like there is support for control structure (branches, loops), which doesn't seem to be the case in this approach. Rather, the "programming language" seems to be a simple expression language with the grammar `E ::= sample(D) | op(E, ..., E)` where `E` is an expression, `op` is a supported operation, and `D` is a supported primitive distribution. It would be good if this could be clarified in the paper. Independence assumption: a main assumption is that the operands to all operations are independent random variables. This seems to be difficult to guarantee in practice because posteriors of parameters are often correlated in probabilistic models. 
Does this mean that (for now), all operands of all operations need to be hand-checked for independence? Does this severely restrict the class of probabilistic models that can be handled by this approach? If so, this should be discussed. It is only briefly mentioned in line 270. The following points are weaknesses, but less important than the previous points: Experimental evaluation: the experiments demonstrate benefits for very simple benchmarks (one-dimensional primitive distributions, one-dimensional Bayesian linear regression). If more realistic experiments (e.g. higher-dimensional examples) could be performed, this would strengthen the paper. However, I don't consider this a condition for publication because the results are already interesting. Abstract interpretation: the static analysis pass, as I understand it, performs abstract interpretation (a common technique in static analysis) where the abstract domain is given by the generalized Gamma algebra. It would be good to mention this keyword, as it helps put this method into context. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Line 92 claims that closure under some operations is known. Could you provide references for this? - What exactly are the guarantees and assumptions for operations under which GGA is not closed? What happens if such operations are applied repeatedly? (cf. Weaknesses) - The operations in Table 1 only apply to equivalence classes $(\nu, \sigma, \rho)$. What happens if the result of an operation is $\mathcal L$ or $\mathcal R_1$? Will those be treated as $(?, ?, \infty)$ or $(-1, ?, 0)$ in subsequent operations? If so, what values are assumed for the question marks? Why don't the infinities lead to problems? - Section 3.3: could you elaborate on why the normalizing flows don't affect the tails (cf. Weaknesses)? - In Table 4, GGA Flow does not seem to work for IG and StudentT. Do you know what the reason for this is? It seems unexpected given all the claimed benefits. 
Typos: - Equation (1): should be $= cx^\nu ...$ or $\sim x^\nu ...$ - line 111: missing indices $f_{ij}$, $R_{ij}$. - line 131: shouldn't it say $\rho \le 0$ instead of $\rho \le -1$? - line 201: it is unclear where the "conditional on" clause ends, please rephrase Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: Limitations are discussed in the Conclusion. I believe a few items should be added to this list: - no control structure in the programming language (it is an expression language, cf. Weaknesses) - tails are not exact (only bounds) if certain operations are involved Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for taking the time to provide a detailed review and giving us the opportunity to address your concerns. We will discuss each point not addressed in our overall author response sequentially. ## Weaknesses **Guarantees and assumptions**: Thank you for the excellent suggestion! To improve readability of Appendix A in the revision, we will book-end the results with the following main theorem. **Main Theorem**: Let $X_1,\dots,X_n$ be independent random variables with generalized Gamma tails. Then - For any $i \neq j$, $c \neq 0$ and $\beta > 0$, $X_i + X_j$, $X_i \times X_j$, $c X_i$, and $X_i^\beta$ also exhibit generalized Gamma tails as detailed in Table 1 (addition, multiplication, scalar multiplication, powers). - If $X_i$ and $X_j$ have densities $p_i$ and $p_j$, then the random variable with density proportional to $p_i p_j$ has generalized Gamma tails as in Table 1 (product of densities). - If $X_i$ is generalized Gamma distributed, then $\frac{1}{X_i}$ has _exactly_ the generalized Gamma distribution described in Table 1 (reciprocals). - If $X_i$ has density $p$ continuous at zero with $p(0) > 0$, then $\frac{1}{X_i}$ also has generalized Gamma tails described in Table 1 (reciprocals). - For any Lipschitz function $f$, the tail of $f(X_1,\dots,X_n)$ is no heavier than the generalized Gamma tail detailed in Table 1 (Lipschitz functions). If $f$ is asymptotically linear then $f(X_1,\dots,X_n)$ has _exactly_ the described tail. - $\log X_i$ and $e^{X_i}$ have tails no heavier than the generalized Gamma tails in Table 1 (exponentials, logarithms). If $X_i$ is regularly varying, then $\log X_i$ has _exactly_ the described tail. If $X_i$ is exponentially distributed, then $\exp X_i$ has _exactly_ the described tail. Hopefully this clears up any confusion around the theoretical results. 
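To make the reciprocal case concrete, here is our own change-of-variables check (the parameter bookkeeping in Table 1 may use a different but equivalent convention): if $X$ has density $p(x) \propto x^{\nu} e^{-\sigma x^{\rho}}$ with $\rho > 0$, then $Y = 1/X$ has density

$$q(y) = p(1/y)\, y^{-2} \propto y^{-\nu - 2}\, e^{-\sigma y^{-\rho}},$$

i.e. the parameters map as $(\nu, \sigma, \rho) \mapsto (-\nu - 2, \sigma, -\rho)$, which explains why the reciprocal of a generalized Gamma distribution remains exactly within the class.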
**Repeated application**: Provided independence holds, composition of operations in the GGA remains consistent unless one applies Lipschitz functions, logarithms, or exponentials. If one of these operations is applied, the tail becomes an upper bound, which remains consistent under addition, multiplication, and powers, but not reciprocals. Given that we are working with a fixed class of tails, such behavior is inevitable, and it is possible to perform a sequence of operations for which the tail is no longer accurate. Nevertheless, we have endeavored to create a system that balances theoretical consistency with practical value, and find that in most cases, the GGA produces a tail that is either exact or a very good estimate (the SGD example was particularly convincing to us). **Bulk correction**: Yes, neural spline flows are 1-Lipschitz linear functions in the tail. We implied this by writing "identity functions outside of a bounded interval" (line 147) but will revise to be more explicit for clarity. **Probabilistic programming languages**: We would like to point out the distinction between the host PPL and the symbolic expressions where the GGA is applicable. The GGA itself is not a PPL, but rather an ahead-of-time analysis method compatible with a PPL. Our host PPL (beanmachine) is based in Python and capable of branching and looping. At the end of a single execution trace, the expressions constituting a target random variable are analyzed by the GGA to determine its tail parameters. Perhaps more interesting, and indeed outside of the scope of the GGA we have developed, is the ability to directly analyze control flow structures. However, we note that analysis of a mixture random variable generated by branching control flows requires marginalization, where even approximate inference is known to be NP-hard (line 274 and Koller and Friedman, 2009). 
Additional assumptions such as junction-tree structure or restricted distribution families may permit further analysis and we leave this for future work. **Experimental evaluation**: We have restricted our experiments to those where we know the solution exactly, as it is difficult to provide a good frame of reference otherwise. **Abstract interpretation**: Thank you for this keyword, indeed, this describes our approach very well. We will include it in the revised version. ## Questions Responses to questions not already addressed by the "Weaknesses" section: - Line 92 references --- These results are cited in Appendix A, but we are happy to include these citations here as well. - Table 1 $\mathcal{L}$ or $\mathcal{R}_1$ results --- Again, we are taking some liberties to encode distributions that do not fit into the generalized Gamma tail framework. $\mathcal{L}$ is treated as $(0,1,\infty)$ and $\mathcal{R}_1$ is treated as $(-1,1,0)$. Any operation which results in a super-light tail ($\rho = \infty$) becomes $\mathcal{L}$ and any operation which results in a super-heavy tail ($\rho \leq 0$ and $\nu \geq -1$) becomes $\mathcal{R}_1$. Each of the operations is consistent under these definitions, where $1/\infty$ is treated as $0$. One deficiency here that we recently became aware of is that multiplication with a constant is not always equal to multiplication by $\mathcal{L}$ --- this is the one operation where multiplication by $\mathcal{L}$ can give different answers depending on how $\nu,\sigma$ are defined. This is unavoidable, but our procedure still offers an excellent heuristic even in this setting. In any case, we will include this discussion in the revised version. - Table 4 IG and StudentT --- The Pareto $\hat{k}$ diagnostic (Yao, Vehtari, Simpson, Gelman, 2018) is based on a heuristic argument and as a test for divergence is known to yield false positives, which may be the case in the two examples mentioned. 
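A minimal sketch of the special-class encoding just described (our own illustration, not the paper's implementation; names are hypothetical):

```python
import math

# Tails are triples (nu, sigma, rho) for densities behaving like
# c * x**nu * exp(-sigma * x**rho); the special classes use the
# encodings stated in the rebuttal.
L = (0.0, 1.0, math.inf)     # super-light tails, encoded as (0, 1, inf)
R1 = (-1.0, 1.0, 0.0)        # super-heavy tails, encoded as (-1, 1, 0)

def clamp(tail):
    """Send any computed tail that leaves the generalized Gamma class
    to the appropriate special class, as described in the rebuttal."""
    nu, sigma, rho = tail
    if rho == math.inf:          # super-light tail (rho = infinity)
        return L
    if rho <= 0 and nu >= -1:    # super-heavy tail (rho <= 0, nu >= -1)
        return R1
    return tail
```

For example, `clamp((0.5, 1.0, -1.0))` returns `R1`, while an ordinary generalized Gamma tail such as `(2.0, 1.0, 1.0)` passes through unchanged.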
--- Rebuttal Comment 1.1: Comment: Thank you for the thorough response. Regarding the point about the guarantees & assumptions as a self-contained theorem, I wasn't looking for a theorem describing Table 1, but a theorem about the tails of a whole probabilistic program, not just individual operations. I don't mind the individual operations described in a table (as long as the assumptions are written down!). My concern was mostly about the composition of operations and the guarantees that still hold for such compositions. The paragraph "repeated applications" in your rebuttal is what I was looking for. Such a discussion should definitely be included in the paper. It would be even better if this could be turned into a theorem saying under what circumstances (i.e. which operations are allowed to occur in the program) and assumptions (e.g. independence), the computed tail of a *whole* probabilistic program (not just individual operations) is exact, an upper bound, a lower bound, or just an approximation. For the most part, my concerns regarding the theoretical soundness have been alleviated though. Thank you for your other clarifications; they make sense to me. Overall, I have updated my score from 5 to 7.
Rebuttal 1: Rebuttal: We thank the reviewers for their time and valuable feedback. We appreciate that reviewers have recognized the novelty of our approach and its applications in ahead-of-time static analysis of probabilistic programming languages (PPL). Suggested minor changes and fixes have been incorporated into the working document. Below we respond to points shared by more than one reviewer and provide more specific responses to individual reviewers as comments. **Independence assumption**: Independence of operands is evidently key to the algebra functioning correctly, as for example, the tail of $X^2$ differs from the tail of $X Y$ even if $X$ and $Y$ share the same tail. We will include a few words in Section 2 to be clear about this. Dealing with arbitrary dependencies that arise due to composition requires combining symbolic methods with the GGA. For example, if $Y = X_1 + X_2$ and $Z = X_2 + X_3$, then even if the $X_i$ are independent the GGA cannot be applied as-is to $Y + Z$. However, the equivalent symbolic representation $Y + Z = X_1 + 2 X_2 + X_3$ can be dealt with using the GGA. This is discussed in the conclusion (line 270), but we can include further discussion on this matter in the revision.
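The symbolic flattening step described above can be sketched with a simple coefficient map over the independent base variables (our own toy representation, not the paper's code):

```python
from collections import Counter

def lin(**coeffs):
    """A linear combination of independent base variables,
    stored as a Counter mapping variable name -> coefficient."""
    return Counter(coeffs)

def add(a, b):
    """Symbolic addition: merge coefficients before applying tail rules,
    so a shared variable like X2 is counted once with coefficient 2."""
    return a + b

y = lin(x1=1, x2=1)   # Y = X1 + X2
z = lin(x2=1, x3=1)   # Z = X2 + X3
flat = add(y, z)      # X1 + 2*X2 + X3, now a sum over independent terms
```

In `flat`, the coefficient of `x2` is 2, so the GGA scalar-multiplication and addition rules can then be applied term by term to independent operands.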
NeurIPS_2023_submissions_huggingface
2023
Co-Learning Empirical Games and World Models
Reject
Summary: The paper addresses the problem of multi-agent RL by using learned world models over multiple potential opponent policies. The technique uses a Dyna-style algorithm to train the core policy with a combination of experiences generated through a world model and experiences playing against opponents. One evaluation demonstrates that the world model benefits from training on data from multiple distinct policies. A second evaluation compares ways to use experiences generated from a world model to train a policy, showing that pretraining on purely generated experiences is an effective way to warm-start a policy. An ablation study compares the proposed Dyna-PSRO model to vanilla PSRO on three MARL games, showing improvements over the PSRO model. Strengths: ## originality Modest. The paper extends existing lines of work on MARL and world models, specifically studying the question of policy diversity for training the world model. ## quality Modest. The core results (figure 5) show clear improvements over PSRO. This is limited to a small number of games, and the games themselves are relatively simple domains. The paper does a good job of breaking down specific claims into isolated experiments. ## clarity Low. It was difficult to interpret many of the figures (questions and suggestions below). Generally the results of each experiment were hard to understand and would benefit from a single clear statement of the core outcomes in each section. ## significance Low. The core audience of this work is researchers in MARL, particularly those considering world models as a solution. Weaknesses: The experimental results are promising, but would benefit from expansion. There are a few experiments that would help: 1. More games from MeltingPot. I hate asking simply for "more", but in this case it would help to show how well the agents perform on a wide variety of tasks. The results would help clarify where DynaPSRO benefits and may reveal limitations or areas for improvement. 
The wider set of results would give others confidence in the generality of the improvements gained by planning against diverse other agents. 1. More complex games. Consider more complex environments from PettingZoo (https://pettingzoo.farama.org/) or SMAC (https://github.com/oxwhirl/smac) that would highlight the potential of these algorithms in more complex scenarios. This would help address the point that world models can become unstable and the value of strategic diversity in scenarios that support a much wider array of behaviors. 1. Scaling experiments. For example, when do prediction improvements level off when adding more policies? The experiments only examine adding 2 policies, which is a sparse sample of the space of strategies for most games. The evaluations would benefit from other baselines for comparison. What other algorithms could be used aside from PSRO? The full evaluation (last experiment) would benefit from a set of ablation studies. This could easily replace some of the planning experiments as the ablations would examine similar capabilities. I ask for ablations as these would more convincingly show that the components Dyna adds provide benefit over PSRO. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: - [Q1] How do the results here scale? - This is a broad question, but for any RL algorithm it's important to understand the scaling and sample efficiency of the method. For example, how does increasing the amount of warm-start background training impact asymptotic performance or convergence rate? How does increasing the number of policies used to train the world model alter prediction performance? Reward obtained during training? - [Q2] Figure 2 - This figure is hard to understand and summarize. It may help to instead plot the average performances for each model combination and omit the heat maps. - [Q3] Figure 3 - What is "Plan: Model"? - Why does "Plan: Model" start strong then decay? (Opponents get better?) 
- Please define the legend (labels like "Plan: Model") as the names are not immediately obvious. - [Q4] Figure 4 - Is it correct to conclude that concurrent background planning has no benefit? - [Q5] What ablations can add more detail to Dyna-PSRO model? - What evidence is there that DynaPSRO was learning and responding to strategic diversity during these experiments? - [Q6] Section 3.2.2 DT planning - It seems like warm-start planning is sufficient to capture world model benefits. Why is that? - Consider adding an ablation: remove DT planning but retain BG:W and BG:C. This could clarify how much help the world model is for training the policy, independent of evaluating future states during execution. - [Q7] Figure 5 - Why do the DynaPSRO traces stop partway through the real experiences? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: Yes. The work is focused on integrating world models into game playing agents and recognizes the preliminary nature of this work along with the potential risks introduced by using a simulation (world model) for decision making in real world applications. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: In response to the *summary* of this paper, we wish to clarify that the primary contribution is a learning-based game-solving algorithm Dyna-PSRO (L64--66). While we show benefits to model-based MARL we do not claim this as a major contribution (L62--63). The Dyna-based learner uses experiences generated from play against opponents in both the world model and the real game. The planning results demonstrate that "pretraining" on _planned_ experiences affords effective warm-starting; moreover, that decision-time planning offers further benefits---including potentially computing much stronger policies. The final sentence refers to a "Dyna-PSRO model" and a "PSRO model", which suggests a potential misunderstanding. Vanilla PSRO and Dyna-PSRO are both game-solving algorithms that produce solutions to games (distributions over policies following a solution concept). They compute these solutions in a way that builds and leverages an empirical game model, but are not modeling algorithms. *Originality* See above and Fig~1 in the manuscript. *Quality* The games we study are _apparently_ simple, but in fact challenging from both the perspectives of response learning and game solving. These games are partially observable, zero- or general-sum, and some contain RGB observations. Each of these attributes presents significant learning difficulties and requires ample effort to get algorithms working effectively. Therefore, we think that our selected games give us good insight into the general performance of the algorithms. *Clarity* We have responded to the questions and suggestions below. Each experiment already contains a sentence that summarizes the main conclusions of the experiment (L223--224; L272--273; L310--312; L356--358). *Significance* Our intended audience comprises anyone concerned with reasoning about multiagent situations from a game-theoretic perspective. 
This includes researchers in MARL and beyond, whether or not they already have interest in world models. *Weakness 1* We would love to perform experiments on more games. However, each run of PSRO comes at an exceptionally large cost, making each game evaluation exceedingly expensive. Our results have already required devoting a large amount of compute to configure and run. *Weakness 2* The MeltingPot games we employed here are deceptively complex and hard to learn in: general sum, partially observable, and RGB images as observations. Similar to our response to Weakness~1, we would evaluate the methods on more games given access to more resources. *Weakness 3* The strategic diversity experiments verify the claim that increases in diversity correspond with performance gains. Improvement is limited when the set of policies fully covers the game tree. Therefore, the number of policies resulting in a "level off" is highly game dependent, making any general scaling claims tenuous. *Q1* This work does not purport to introduce the concept of planning (background or decision-time). However, given a perfect world model, in the limit the cost of computing a policy would be free as all learning could be accomplished in planning. Similarly, in the limit of strategic diversity (crudely here meant as number of policies and size of dataset combined), the world model can be expected to learn near-perfect predictions. The inaccuracies of this "in the limit" world model would be from game state information unobserved by any player, about which no general statement can be made. *Q2* We have included the suggested plot in the rebuttal pdf (and in our appendix). Additionally, we included a cross-entropy version (see reviewer Nexz). *Q3* "Plan: Real" and "Plan: Model" are the performance of the planner measured by play in the real game and with the world model, respectively. We have updated the figure's caption to clarify this point. 
As for the shape of the "Plan: Model" curve, this is because the planner starts learning exclusively from data from the world model, and then transitions to learning exclusively from real data. The decay is a result of the planner unlearning errors from the world model, and instead learning a more effective policy in the real game. In all of our experiments, the opponents are fixed (i.e., not simultaneously learning). *Q4* We found concurrent background planning had lower variance and was thereby more consistent in training. In Fig 16, with the worse world model, the impact of concurrent background planning is more pronounced. However, in Fig 4, it is much smaller. Therefore, we do not believe it is correct to conclude that concurrent background planning has no benefit. *Q5* The experiments of 3.1 (Strategic Diversity) and 3.2 (Response Calculation) serve as isolated experiments to explain the benefits of Dyna-PSRO. The reduction in regret of both methods suggests an increase in the strategic diversity. *Q6* We have completed the suggested experiment where BG: C is evaluated without including DT. The results are in the general rebuttal pdf and we will add them to the appendix. We evaluated three versions of BG: C that vary in the proportion of the batch-size that is planned experience (with the proportion shown in parentheses in the legend). We found that BG: C did not measurably improve the planner, and as the proportion of planned experience increases, the performance degrades. We speculate that this is because the planner better fits its policy to interact with the world model instead of the real game. *Q7* The line for Harvest: Categorical stops because it has converged (remains flat on the x-axis for a duration). The lines for the MeltingPot games go as far as we were able to run the algorithms in time for submission. Running either of these algorithms comes at a substantial computational cost due to repeated invocations of deep RL. 
--- Rebuttal Comment 1.1: Comment: Thank you for the thorough replies, revised figures, and additional experiments. Responses below: - [Q1] Thank you for explaining some of the extremes of configurations. I had intended to ask about empirical results but I understand the computational burdens of these experiments. In light of that, it may help to address the question of scaling in terms of time or memory demanded from these algorithms, as they are clearly fairly intensive to run. This may also be something to address in the limitations section. - [Q2] Thank you! These are much easier to read. - [Q4] "In Fig 16, with the worse world model, the impact of concurrent background planning is more pronounced." Figure 16 certainly revises my perspective, I had not noticed this nuance. So is the claim then something like: concurrent background planning can mitigate a poor world model? - [Q5] Acknowledged. - [Q6] Interesting result! So it seems that DT is an effective tool to mitigate the (possible) biases introduced in concurrent background planning. Or is there another better hypothesis? - [Q7] Understood. Thank you for clarifying. --- Reply to Comment 1.1.1: Comment: We're glad that our rebuttal was able to better clarify things for you, and thank you for continuing to engage with our work. We hope that the changes inspired from our discussion clarify things for future readers. We have responded to the unanswered questions below: - *Q1* We agree that more discussion could be included, and have already included discussion about the walltime performance in our response to reviewer Nexz. In effect, Dyna-PSRO theoretically should run roughly twice as slow, because the limiting process is the number of gradient steps. Any additional walltime differences are a result of different compute settings/requirements, or interprocess communication latency. 
On our hardware, described in the appendix, Dyna-PSRO took roughly 12 hours to compute a response whereas PSRO took roughly 2 hours. This additional gap is because the limiting process in our settings was generating _real data_ (as a result of decision-time planning). If we had additional CPUs to generate more data in parallel we could match the data throughput demanded by the learning process and reduce the walltime. We have included this discussion in the appendix. - *Q4* In our opinion, we haven't performed enough analysis to fully support the claim that "concurrent background planning can mitigate a poor world model." Instead, we claim that _multi-faceted planning_ (BG+DT) is unlikely to harm best-response calculation and can have a potentially large benefit when applied effectively. We would also claim that BG+DT generally performs better when used in combination, as opposed to in isolation. - *Q6* We would agree with the reviewer's speculation that (paraphrased) DT is somehow mitigating possible biases introduced by BG:C. One idea is that because DT produces "higher quality" experiences, the magnitude of these examples in the batch can override conflicting examples produced by the world model. This would need further experimentation, but is indeed an interesting trend and worth looking into in future work. Please let us know if you have any further questions, comments, or suggestions and we would be happy to respond to them within the discussion window.
Summary: This paper introduces a new approach to PSRO algorithms, where a world model of the environment is learned concurrently with the iterative PSRO strategy expansion. Strengths: - The authors are right that the problem of having to re-learn policies from scratch is a large problem in the PSRO literature. Therefore, the idea to co-learn a world model alongside the expansion of the empirical game, thereby taking advantage of the diversity of experience created by agents with slightly different best-response targets, is an interesting approach to this problem. To the best of my knowledge this is also a novel solution to this problem. - I really like the presentation of the paper, and in general I think the authors do a very good job in terms of analysing the different moving parts of the framework in a reasonable manner. Weaknesses: I have a few concerns with the paper, though none of these is necessarily game-changing in my evaluation of the work. - I think the greatest misgiving I have with this work is that the related work seems to miss quite a large collection of PSRO papers that probably deserve mentioning. PSRO-style algorithms are a fairly small research area and I am surprised that the authors fail to mention many variants. In particular, as there is a section on strategic diversity itself in the paper, it seems odd that the authors have failed to comment on the line of work on diversity-based PSRO frameworks. For example, [1], [2], [3], [4], [5] are all diversity PSRO approaches. It also fails to place itself in the literature involving PSRO algorithms that attempt to speed up convergence times such as [6], [7]. - Furthermore, I was additionally surprised at the lack of comparison to NeuPL [8] which is another population-based framework attempting to similarly deal with the best ways to transfer information between agents in the population. 
- I do not necessarily believe that the authors need to benchmark against all of the approaches that I have listed. I do however believe the paper still needs work in terms of placing itself within the current literature on PSRO and other population-based frameworks. - Based on the above, my score is set at a borderline accept. However, I am willing to revise this upwards upon seeing a better framing of this work in the current literature. REFERENCES [1] Policy Space Diversity for Non-Transitive Games - Yao et al. 2023 [2] Open-ended learning in symmetric zero-sum games - Balduzzi et al. 2019 [3] Modelling behavioural diversity for learning in open-ended games - Perez-Nieves et al. 2021 [4] Towards unifying behavioural and response diversity for open-ended learning in zero-sum games - Liu et al. 2021 [5] A unified diversity measure for multi agent reinforcement learning - Liu et al. 2022 [6] Pipeline PSRO: A scalable approach for finding approximate Nash equilibria in large games - McAleer et al. 2020 [7] Neural auto-curricula in two-player zero-sum games - Feng et al. 2021 [8] NeuPL: Neural Population Learning - Liu et al. 2022 Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Discussion starting line 208: I think this whole section on strategic diversity is very interesting. However, I am not yet entirely convinced that there is strategic diversity between the PSRO policies, especially as the changes in performance across the PSRO policies in Fig. 2 are only minor. My question to the authors is: was the strategic diversity of the PSRO policies actively checked? If so, am I missing some proof of that here? If not, why was it not checked? In my experience with PSRO policies, it is quite easy to end up with policies that act similarly even in the presence of different training data. 
- Discussion starting line 264: Could the authors comment a little further on the gap between the world model and the real game - do they foresee any scenarios where this gap may be far more damaging to performance than in the proposed experiments? Do they have any intuition for why it does not seem to matter as much as one would expect in these experiments? - Why is this a better transfer learning method than NeuPL? I would be interested to hear the authors' thoughts on this. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: I think the authors actively engage with the limitations of the work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your commendation of our analysis, as well as your useful suggestion to incorporate more extensive discussions on PSRO in our related work section. Based on your input, we have included a paragraph on various PSRO-related algorithms. The key additions include (the full suggested text follows below): - Heuristic diversity measures can enhance PSRO's efficiency by reducing the number of required trained policies. Such methods can typically be directly applied to Dyna-PSRO. - Pipeline PSRO exploits parallelism in best-response computation, which is largely orthogonal to the approaches developed here. - Our usage of strategic diversity shares some common ground with the "behavioral diversity" concept used as a heuristic for strategy exploration, a point we have now included in our strategic diversity section. The reviewer also pointed out the lack of a comparison with NeuPL. While we understand the perspective, we don't see these two methods as directly competitive. They tackle different challenges in the realm of transfer learning for game solving. NeuPL's main emphasis is on transferring policy parameters, whereas Dyna-PSRO is centered around transferring a world model. So we believe neither is necessarily better than the other. Instead, future work should investigate how to integrate these two algorithms and reap both benefits. A significant limitation of NeuPL, however, is the need to predefine the number of policies for the empirical game. This hyperparameter can significantly influence the game-solving process, and its implications are not fully understood. We have addressed your other questions below: > PSRO literature review. A complementary route to improve the efficiency of PSRO is to find methods that require computing fewer total policies when building an empirical game. Methods following this route define a measure of \emph{diversity} and modify the strategy exploration of PSRO to produce policies that maximize diversity. Liu et al. 
classified these measures into two categories: behavioral, embodying state-action occupancy of policies, and response, representing the reward profile against an assortment of co-players. In their work, Nieves et al. proposed a geometric interpretation of behavioral diversity, measured using determinantal point processes. Balduzzi et al., on the other hand, suggested a reward diversity measure that fosters specialization by urging agents to disregard their vulnerabilities to other players. Yao et al. presented a diversity metric that has the additional benefit of better approximating Nash equilibrium. Unifying these, Liu et al. brought forth a metric considering both behavioral and response diversity. Beyond diversity methods, other complementary routes for improving the efficiency of PSRO have been investigated. Feng et al., for example, advocated meta-learning the empirical game solver. McAleer et al. introduced a method to conduct multiple strategy exploration steps concurrently, which reduces the walltime of PSRO---a distinct but similarly important measure of efficiency. Liu et al. presented the idea of learning a single policy that represented the entire policy population. This allows the policy parameters to be shared throughout the population, but requires prespecifying the number of policies that will comprise the empirical game. > Q: ... was the strategic diversity of the PSRO policies actively checked? ... Thank you for pointing out that we forgot to include this analysis in the manuscript. We have added two comparisons of the set of policies to the appendix. We compare policies by their action agreement. To do this we collect 30 episodes of each combination of the policies. Then we check the proportion of actions that each pair of policies agrees on across all observations. 
The results across all episodes are as follows:

|Policy ID|0|1|2|3|
|:--|--:|--:|--:|--:|
|0|1.0000|0.1262|0.1277|0.1279|
|1| |1.0000|0.8868|0.8269|
|2| | |1.0000|0.8961|
|3| | | |1.0000|

The results across all episodes except those containing the random policy are below. This comparison is meant to highlight similarity under more strategically salient episodes.

|Policy ID|0|1|2|3|
|:--|--:|--:|--:|--:|
|0|1.0000|0.1265|0.1272|0.1283|
|1| |1.0000|0.8542|0.7671|
|2| | |1.0000|0.8608|
|3| | | |1.0000|

> Q: Could the authors comment on the potential impacts of the gap between background planned episodes versus episodes in the real game? (Paraphrased). It is possible that planning can damage the performance of an agent for a variety of reasons: an inadequately trained world model, catastrophic prediction error (e.g., in a high-risk situation), etc. We speculate that as long as the world model is trained sufficiently to encode some of the patterns in the dynamics, we should see some positive transfer, and therefore benefit from using a world model. Any errors should be corrected once training includes real transitions. However, practitioners need to be mindful of how the learning policy weights the respective real and planned experiences. For example, consider a learner that consistently receives a batch of two experiences: one planned experience with a catastrophic error, and the equivalent real experience. In this degenerate, constructed example, the learner may fail to learn an effective policy. Yet, if more emphasis is placed on experiences drawn from the actual game, such errors may be avoided. In our experiments, we either transition to exclusively learning from real data, or to learning from batches that heavily favor real data (at a 75/25 split), which is why we see fewer failure cases.
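The real/planned batch mixing described in that answer could be sketched as follows. This is a minimal illustration of the idea only; the function name, buffer representation, and batch size are my own assumptions, not the authors' implementation.

```python
import random

def sample_batch(real_buffer, planned_buffer, batch_size=32, real_frac=0.75):
    """Draw a training batch that favors real over planned (model-generated)
    experience, e.g. at a 75/25 split. Buffers are lists of transitions."""
    n_real = int(batch_size * real_frac)          # e.g. 24 of 32 from real data
    n_planned = batch_size - n_real               # remaining 8 from the model
    batch = (random.sample(real_buffer, n_real)
             + random.sample(planned_buffer, n_planned))
    random.shuffle(batch)                         # avoid ordering artifacts
    return batch
```

Down-weighting planned experience this way bounds the influence of a single catastrophic model error, matching the degenerate two-experience example above.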
--- Rebuttal Comment 1.1: Comment: Hi, thanks for taking the time to clarify my concerns. 1) NeuPL - I think your reasoning in not comparing the two is fair, and I agree that combining the two may be beneficial in the future. 2) PSRO lit review - I think what you have written is reasonable and should be included in the paper. This was my main concern, so I will adjust my score to a 6. And thank you for the table on strategic diversity; I think this is also worth incorporating into the manuscript.
Summary: The authors consider learning world models for deep reinforcement learning in combination with the construction of empirical games through PSRO. They first show that world models benefit from training on a diverse set of strategy profiles as can be generated through PSRO meta-game solvers. They then empirically show that PSRO best responses can enjoy sample efficiency benefits by training with simulated world model experience. Finally, they present Dyna-PSRO, in which PSRO best responses make use of a world model trained on all available experiences collected thus far in a run of the PSRO algorithm. Dyna-PSRO provides lower-regret solutions with higher sample efficiency than PSRO without a world model. Strengths: - The paper is very well written and presented, and the experiments are well designed. - World models are seeing increased use in the RL community, and PSRO is one of the more practical and general methods currently available for finding approximate game solutions. This paper provides insights on how to properly combine the two and make improvements to PSRO's sample efficiency, which is one of its largest issues. - The proposed Dyna-PSRO method is sound. - While many implementation details are not present in the main paper, the appendix describes these details thoroughly. Weaknesses: It would have been nice to see how current high-performing world model methods such as Dreamer, which employs latent state spaces [1,2] might perform with the same approach. It's not immediately clear if experiments like in section 3.3.2 would have had the same outcome. [1] Hafner, Danijar, et al. "Dream to Control: Learning Behaviors by Latent Imagination." International Conference on Learning Representations. 2019. [2] Hafner, Danijar, et al. "Mastering diverse domains through world models." arXiv preprint arXiv:2301.04104 (2023). 
Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Was there a strong reason for adapting a Dyna-based method over other existing world model approaches? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: All limitations have been adequately addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
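For readers unfamiliar with the algorithm this review discusses, the basic PSRO loop can be sketched as below. All interfaces here are hypothetical stand-ins (not the paper's code): each iteration estimates payoffs for the current policy populations, solves the resulting empirical game for a meta-strategy, and extends each population with a best response.

```python
def psro(num_players, initial_policies, estimate_payoffs, solve_meta,
         best_response, iterations):
    """Skeletal PSRO loop with caller-supplied components:
    estimate_payoffs simulates all policy profiles, solve_meta solves the
    empirical game (e.g. for a Nash equilibrium), and best_response trains
    a new policy for one player against the meta-strategy profile."""
    populations = [list(pop) for pop in initial_policies]
    meta = None
    for _ in range(iterations):
        payoffs = estimate_payoffs(populations)   # simulate the empirical game
        meta = solve_meta(payoffs)                # meta-strategy profile
        for p in range(num_players):
            populations[p].append(best_response(p, populations, meta))
    return populations, meta
```

Dyna-PSRO, as the review summarizes, additionally routes every simulated episode into a shared world model that accelerates the `best_response` step.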
Rebuttal 1: Rebuttal: Thank you for the kind words in your review. The main question posed pertains to our choice of a Dyna-based method as opposed to other approaches. The Dyna architecture is notably broad, encompassing any learner that combines learning, planning, and acting. The reason behind our specific world model implementation, and its application, was a deliberate emphasis on introducing complexity incrementally. This approach allowed us to disentangle the many parallel moving parts that comprise Dyna-PSRO, and better understand which factors contributed to its success. As noted by the reviewer, a trending class of world-model methods employs latent state spaces (e.g., Dreamer). The reviewer is also correct to speculate that it's not clear how this class of methods would perform on the decision-time planning experiments (or, for that matter, on any other experiments). We have amended our discussion to note that the trend we found in planning may not hold generally across all types of world models or planning algorithms. We do think that exploring latent-state-space world models is an exciting area for future development that could be leveraged within the Dyna-PSRO framework. --- Rebuttal Comment 1.1: Comment: Thank you for your replies. It would be great to also include in the paper the reasoning for choosing Dyna that you provided here. --- Reply to Comment 1.1.1: Comment: Thank you for your continued engagement with our work. We will update the paper to make the choice of Dyna more clear as suggested. Thanks!
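The one-step decision-time planning that a Dyna-style learner can perform with a learned world model could be sketched as follows. The interfaces (`world_model`, `value_fn`) are hypothetical illustrations, not the paper's implementation.

```python
def one_step_lookahead(obs, actions, world_model, value_fn, gamma=0.99):
    """Pick the action maximizing predicted reward plus the discounted
    value of the model-predicted next observation.
    world_model(obs, action) -> (next_obs, reward); value_fn(obs) -> float."""
    def q(action):
        next_obs, reward = world_model(obs, action)
        return reward + gamma * value_fn(next_obs)
    return max(actions, key=q)
```

Because the search is only one step deep, errors in `value_fn` on model-predicted states can dominate the action-value estimate, which is one reason the paper pairs decision-time planning with background training of the value function.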
Summary: This paper describes combining two things: training a world model of a game, and doing Policy Space Response Oracles (PSRO) on the game. Doing PSRO involves getting a lot of episodes from the game (episodes are used to train the RL best-responses, and also to estimate the payoffs of the empirical game). The novel algorithm in this work (Dyna-PSRO) can be thought of as a modification of PSRO where those episodes are **also** used to train a world model (which is essentially a learned simulator of the game engine). Then, the world model is used to improve the training process of the best-response policies. Through experimental results, the authors show that this improved training process (based on Dyna) can cause the best-response learners to learn a stronger policy than the normal method when using the same amount of interactions with the real game environment. It does this by training the policy using trajectories from the world model (in addition to the usual trajectories from the real game environment), and by equipping the agents with one-step lookahead planning during training. This paper showcases experiments on the Dyna-PSRO algorithm in three games, and Dyna-PSRO outperforms PSRO in all three, as measured by an approximation of NashConv. The paper also shows experiments to measure the quality of the learned world model, to test the hypothesis that Dyna-PSRO results in a good world model. I think the paper has some flaws* in its current form, but the core work of the paper is good, the charts are beautiful, and the results are strong. --- *Edit: many of the abovementioned flaws were addressed during the rebuttal/discussion period. Strengths: - Overall, the paper is well-executed. - It is well-written and polished. - There has clearly been a lot of time and effort put into the engineering and writing of this paper. There are 4 sets of thorough experiments in the main paper, and more in the appendix. - The figures are extremely readable. 
- The results concerning the performance of the best-response policies are impressive. - The effort will surely be helpful to future researchers: the research directions of (1) improving the efficiency of PSRO response calculations and (2) training better world models should continue to flourish, and the work presented in this paper contributes to both. - The research direction seems natural, especially in the direction of using world models to improve the performance of PSRO. Weaknesses: I think the paper could be better in explaining or hypothesizing the "why" for a lot of things, even if just qualitatively. I think the paper states some conclusions too strongly. I think some things are not explained well enough and are confusing to the reader (at least, to me): - SumRegret metric: - It's really not clear from the main paper how SumRegret works (even though it is explained in the appendix). This could be clarified by defining the terms "method" and "combined game". - Also, I would feel a lot better if I saw results measured by an alternative metric, where the deviation set is the set of **all** policies. The $\max_{\pi_i \in \bar{\Pi}_i}$ could then be approximated by just training one more response policy (as if doing one last epoch of PSRO). This seems like it would be a more accurate approximation of the NashConv. Is there any reason to use the metric in the paper instead of this? - Empirical Game Solution not described - Since the settings here are general-sum, it's probably important to specify what solution concept is used for the meta-strategies in the main paper (even though it is included in the appendix). - Experiments in Section 3.1 Strategic Diversity - Looking purely at Figure 2, the conclusion "Overall, these results support the claim that strategic diversity enhances the training of world models" does not ring true to me.
For example, there are three world models which perform better on the metric (accuracy) used in Figure 2 than the most diverse one, for Observations. - Even if I look in the appendix at E.1, there doesn't seem to be significant evidence to support the conclusion: multiple world models have similar recall scores than the most diverse one, and the one trained without the random policy seems to have better scores. - I would be interested in seeing the cross-entropy loss instead of (or in addition to) the accuracy. - The discussion of the Decision-Time Planning results (3.2.2) seems incomplete: - "**The main outcome of these experiments is the observation that multi-faceted planning is unlikely to harm a response calculation,** and has a potentially large benefit when applied effectively. These results support the claim that world models offer the potential to improve response calculation through decision-time planning." (emphasis mine) - However, Figure 4 does show that decision-time planning causes the response to be *worse*: The solid blue line (top) has no decision time planning, and the dashed gray line (second from the top) has decision-time planning, and performs worse. - It would be nice if there was some discussion about this, perhaps an intuitive/qualitative reason why this is. - Dyna-PSRO results need more details (Figure 5): - For each experiment, how many policies (iterations of PSRO) were there? - Does each policy train for a fixed number of steps, or until some measure of convergence is reached? - Was "policy" vs. "strategy" ever defined like this before? In my opinion, we shouldn't define these terms like this, because they are usually considered synonymous. The terms I'm familiar with are "policy" or "strategy" for the former, and "meta-policy" or "meta-strategy" or "meta-strategy distribution" for the latter. Just my opinion! - I was very confused by the definition of World Model while reading the paper. 
- Even after reading it through entirely, I was under the impression that each player had their own world model, and that it implicitly modeled the actions of the opponent. - If one misses the bold notation of the definition of agent world model from line 137 to 141, it's easy to think that this is the case, especially since the phrasing is that "the agent learns and uses a ... world model" (instead of, say, "the agents learn and use a ... world model). - On one hand, the formal definition given for an "agent world model" is technically accurate, and I am just dumb. On the other hand, I suspect many of us are dumb, and will be similarly confused upon reading the paper. (Also, I'm not **that** dumb: it's really hard to tell that the O and A are bolded!) (Also, even if some of us are not dumb, we are likely lazy and will gloss over the explanation that boldface means joint.) This is all to say that I would suggest explicitly stating that the world model takes as input an observation and action from **each** player, and returns an observation and reward to each player. And that the world model does NOT model the actions of any player. - The bolding is nice, but it would be less confusing to **also** say "strategy profile" or "joint strategy" anytime this is meant instead of just "strategy" and something bold, as it's very easy to miss or forget what something bold means (plus, it seems incorrect to call a strategy profile a strategy). Also, maybe emphasize that sentence that explains what boldface means, so that readers don't miss it? - For example in line 774 and 775 of the appendix: - "This is typically not tractable, but instead draws are taken from a dataset generated from play of a behavioral strategy **σ**. 
And the performance of the world model is measured under a target strategy **σ∗**" - should be "strategy profile" and "target strategy profile" - and same in Line 159 and 160 of the main paper, and throughout section 3.1 Technical Quality: 3 good Clarity: 3 good Questions for Authors: - The world model is deterministic. Can it easily be made stochastic? - The Harvest game has stochasticity in the apple regeneration, right? Do you think the deterministic world model impacted anything in the experiments? - Section 3.1 Strategic Diversity - I'm guessing the world model implemented for the categorical harvest game outputs some logits that can be interpreted for each cell as a probability distribution over the possible entities in that cell. Then the world model takes the argmax of each cell and returns the observation? Or does it sample from the probability distribution? If not, why not? - What are the details of the weighted-average cross-entropy objective? - Did the empirical games often have multiple Nash equilibria with different values? If so, do the authors think it would change anything if specific Nash equilibria were sought -- e.g. welfare-maximizing? - Section 3.2.1 and 3.2.2 - Section 3.2.1: "The planner employs the x world model and the opponent plays the previously held-out policy." The opponent plays the previously held-out policy in all 3 cases, right? (Baseline, Plan: Real, Plan: Model) - It's stated that the planner switches from planned to real experience after 10,000 updates. Does this correspond to 1,280,000 experiences? - In Figure 3 and 4, I would be interested in knowing how the baseline experiments do when they are run with as much experience as the amount of real+planned experience that the planning models get. - On Figure 3, if the planning learners continue to outperform the baseline ones even on the real+planned exp x-axis, then what is the reason for that?
Does the baseline policy settle into some local maximum that it can't escape, while the pretraining on the world model experience makes the planner walk into some better part of the policy space? - Is there any explanation for why the baseline returns are so variable and spiky compared to the planner's? e.g. in Figure 3. - Section 4: Is the decision-time planning only used to train? Do the final learned agents use any decision-time planning? - Seems like a lot of problems were the result of a subpar World Model - Could the performance of the world model be improved? - In the experiments of 3.1 - Are the world models trained to convergence? Do they achieve near-0 training loss? What do the training curves look like? - Did you try training the world model where one or more of the policies was a PSRO policy that played epsilon-random? - Out of curiosity, it would be nice to know what the results look like with wall-time on the x-axis. - Or, how long does it take to query the world model? How long does it take to query the real game environment? - Will the code be open-sourced? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Limitations addressed Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
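For reference on the metric debated in this review, the NashConv quantity that SumRegret approximates can be written exactly for a small matrix game: the sum over players of (best-deviation payoff minus expected payoff) under a profile. The function name and two-player restriction below are my own; this is an illustration of the concept, not the paper's metric code.

```python
import numpy as np

def nash_conv(payoffs, profile):
    """NashConv of a mixed profile in a two-player matrix game.
    payoffs[p][i, j] is player p's payoff when row plays i, column plays j;
    profile = (sigma_row, sigma_col) are mixed strategies."""
    sigma_row, sigma_col = profile
    value_row = sigma_row @ payoffs[0] @ sigma_col
    value_col = sigma_row @ payoffs[1] @ sigma_col
    br_row = np.max(payoffs[0] @ sigma_col)   # row player's best pure deviation
    br_col = np.max(sigma_row @ payoffs[1])   # column player's best pure deviation
    return (br_row - value_row) + (br_col - value_col)
```

In PSRO-scale games the maximum over deviations cannot be enumerated, so it is approximated by trained response policies, which is exactly the source of the reviewer's concern about the deviation set.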
Rebuttal 1: Rebuttal: Thank you for engaging so extensively with our work and for the kind words on its effort and quality. We have done our best to reply to your comments (paraphrased here) below within the word limit. > SumRegret metric. We have included more text defining the terms _method_ and _combined game_, and a sentence describing the computation. Regarding alternative metrics, we understand the suggested metric to be equivalent to evaluating our regret results without the final datapoint from each graph. Combined-game regret should provide at worst an equivalent approximation of regret, as we are measuring each method's performance against _an additional_ set of policies (those generated by the other method). We do agree with the general sentiment of needing stronger ways to approximate regret. > Empirical Game Solution not described. We have updated the text to explicitly mention Nash as our solution concept. > Experiments on Strategic Diversity. It is true that diversity did not strictly help on _observation accuracy_; when taken in combination with _reward recall_, we would suggest that the last world model performed better in aggregate. Nevertheless, this claim is not easily measured, so we have softened the claims about our results. We have included the requested cross-entropy loss results in the general response pdf. The world model trained with three policies has the lowest observation loss, and a reward loss comparable with several other models (but they are beaten by the 2-PSRO-policy world model). > Discussion of Decision-Time Planning Results We have emphasized the claim that *multi-faceted* planning is unlikely to harm response calculation. Without BG:~W, the DT planner must first learn a value function that is effective on predicted states before it can effectively learn a policy. Since our search procedure is shallow, any errors from the value function may dominate the action-value estimate, producing a poor policy.
This is evident in the slow learning seen in section 3.2.2. We have included this in the discussion of that section. > Dyna-PSRO Results The number of iterations for each experiment was as follows (Dyna-PSRO / PSRO): - Harvest Categorical: 7/25 - Harvest RGB: 7/17 - RWS: 8/17 The number of iterations we ran each experiment for was primarily determined by computational limitations. Given that upper bound, our results were lower-bounded by the seed that completed the fewest iterations. Each policy training was run for a fixed number of steps, following the decision-time planning method details. We have updated the methods section to make this detail clearer. > Was "policy" vs. "strategy" ... We employ "policy" as is typical in the RL literature, to refer to the object generated in a BR calculation. We use "strategy" as is typical in GT, in contexts where game-theoretic reasoning is more salient than BR (RL). A "meta-strategy" as used in some PSRO literature is just a "strategy" in the empirical game. > Definition of World Model We have updated the text around the definition of the world model to emphasize that its inputs and outputs are joint across agents and that a single world model is shared by all agents (including the rephrasing suggested by the reviewer). > Bolding We have edited the entire paper to explicitly say "joint" when referring to elements where it applies. > Stochastic World Model We agree that a stochastic world model would provide greater potential fidelity. The deterministic world model introduced modeling errors in the player spawn locations and item spawns/regeneration. This had a large impact on BG planning, and a minor impact on DT planning, depending on rollout length and the conditioning state. Training and using (via planning) a stochastic world model, however, would increase complexity and also complicate interpretability. We consider this an important future direction for extending our methods.
> Q: Sampling transitions from the world model details. The argmax of each cell is selected. This was done so that we could have results that were easy to interpret through standard classification metrics (e.g., accuracy, recall), and to be consistent with how we leverage the world model in subsequent experiments. > Q: Did the empirical games often have multiple equilibria? We did not extensively evaluate the empirical games for multiple equilibria. We wouldn't expect the chosen solution concept to make a measurable difference, because Dyna-PSRO doesn't add any additional priors that would shape the learned policies. > Q: Section 3.2.1 and 3.2.2. a) Yes. b) Yes. c) The requested experimental result is in the general response pdf. From our preliminary experiments we knew that the baseline had largely converged by this point; surprisingly, it appears converged even given substantially more training. d) The baseline is only slightly more variable than the planner, because of the contrast of the graph. We'll try to make this easier to see across all figures. > Q: Decision-time planning only used in training for Dyna-PSRO? Yes. > Q: Subpar World Model The performance of the world model could be improved, and improvements in it should correspond to improvements in planning and Dyna-PSRO. The world model's training curves quickly converged to a non-zero loss (0.2-0.3), with further diagnostic metrics. We did not try training world models on other types of policies than those mentioned in the manuscript. > Q: Walltime of algorithms? In our experiments PSRO took roughly 2 hours per epoch, whereas Dyna-PSRO took roughly 12 hours. One should expect the lower-bound walltime difference for Dyna-PSRO to be roughly a factor of two, because the limiting process is performing the gradient steps. All other walltime differences are a result of different compute settings/requirements, or interprocess communication latency. > Q: Will the code be open-sourced? Yes.
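The greedy decoding described in the first answer above can be sketched as follows. The function name and array layout (a grid of per-cell entity logits, entities on the last axis) are assumptions for illustration, not the authors' code.

```python
import numpy as np

def decode_observation(cell_logits):
    """Greedy decoding of a world model's per-cell entity logits into a
    discrete observation grid: the argmax entity is selected for each cell.
    Sampling from the per-cell softmax would be the stochastic alternative
    raised in the reviewer's question."""
    return np.argmax(cell_logits, axis=-1)
```

Argmax decoding makes predictions deterministic and directly comparable to ground-truth grids via accuracy and recall, which is the interpretability rationale given in the rebuttal.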
--- Rebuttal Comment 1.1: Title: Response to Rebuttal Comment: 1. SumRegret metric. - > Regarding alternative metrics, we understand the suggested metric to be equivalent to evaluating our regret results without the final datapoint from each graph. Combined-game regret should provide at worst an equivalent approximation of regret as we are measuring each method's performance against an additional set of policies (those generated by the other method). We do agree with the general sentiment of needing stronger ways to approximate regret. - Oh, yeah, I can see how that's equivalent. In an ideal world, a stronger way to approximate nashconv might be to train response policies for longer so that they more closely approximate the exact best-responses (if they don't already near convergence by the end of training?). 2. Experiments on Strategic Diversity. - Thanks for the clarifications and the additional charts. It's interesting that the most diverse world model has the lowest observation cross-entropy, and it makes me feel more confident about the thesis that diversity makes better world models. I am also happy to hear that the claim in the paper was softened. 3. Discussion of Decision-Time Planning Results - hm, yeah I guess I missed the emphasis on *multifaceted* planning in my review. Seems reasonable. Is the claim that an agent with DT + BG is unlikely to be worse than one with neither? Or is it that DT+BG is unlikely to be worse than an agent with DT, and unlikely to be worse than an agent with BG planning? Seems like the natural interpretation is the latter. If so, it seems useful to have a chart comparing DT+BG to BG (but I appreciate that it's probably hard to figure out how/where to put such a chart). 4. Dyna-PSRO Results - Thanks for the details. To reiterate, I feel strongly that these details are important and should be included (in the appendix at the very least). 5. Was "policy" vs. "strategy" ... 
- > We employ "policy" as typical in the RL literature, to refer to the object generated in a BR calculation. We use "strategy" as typical in GT in the context when GT reasoning is more salient than BR (RL). A "meta-strategy" as used in some PSRO literature is just a "strategy" in the empirical game. - Yes, I agree it's not wrong to say that the strategy in the empirical game is a strategy. But I still think it's confusing because without knowing the way the terms are defined in the paper, a "strategy" could be interpreted as a strategy in the empirical game, or as a strategy in the original game. Using the term "meta-strategy" avoids this ambiguity. But maybe that's just me! 6. Q: Section 3.2.1 and 3.2.2. - Thanks for the new chart in the pdf. I agree that it's surprising that it's converged even given more training. Do the authors understand or have a hypothesis for why this is? This seems pretty counterintuitive to me, and makes me wonder if the explanation could be something like a software bug. 7. Q: Walltime of algorithms? - Thanks for the walltime measurements and explanations. I think it would be useful to others to put those in the appendix somewhere, including the explanations. ### summary Thanks for the thorough rebuttal. I appreciate the clarifications that alleviated some of my concerns about the sumregret metric and strategic diversity and DTP claims, the additional details about Dyna-PSRO experiments and strategic diversity experiments, the loosening of some claims, and that my suggestions for reducing confusion for the world model definitions and "joint" definitions were taken. I'm much more confident in recommending this paper for acceptance now. --- Reply to Comment 1.1.1: Comment: Thank you for your continued thoughtful and friendly engagement with our work. We're sorry that points in our rebuttal were terse and may not have fully answered each point. We've responded to specific items below. 1. **SumRegret Metric:** 1.
Yes, you could imagine performing extensive post-run experiments to test for convergence; for example, building off the reviewer's suggestion, running _several_ independent BR calculations under different settings for extended periods of time. While this should improve the estimate, it lacks principled grounding in how and why the operation is performed, which leaves it unsatisfying---at least to us. Moreover, it really suggests a flaw with the BR operation that is used within PSRO. Addressing that would fix the upstream problems and make the additional analysis unnecessary. That is all rather idealized, though. In our opinion, one large problem in MARL is that RL algorithms often need tuning and exploration schedules modified to specific opponent policies. This is both a result of the sensitivity of RL algorithms, and of the fact that co-players can dramatically vary the difficulty of the learning problem encountered. 2. An idea that your suggestion sparked was to study the stability of the combined game. The combined game could be solved and a BR computed against it to see if it is "converged"/stable. We won't be able to run this experiment and analysis within the discussion period, but we'll look at it in future work. Thanks! :) 2. **Baseline Performance:** 1. Building off the previous point, we strongly suspect that the suboptimal convergence given longer compute is a _hyperparameter generalization problem_. 2. To explain this problem, consider that our BR hyperparameters were tuned for play against BR(BR(Random)). That is, we optimized for the performance of BR(BR(BR(Random))). We chose this policy to optimize against because it was the first "strategically interesting" policy we could construct. Random performs poorly, and BR(Random) effectively did not need to consider play against another policy; therefore, BR(BR(Random)) was the first policy that should learn about multiplayer interaction. 3.
Any differences between the policy BR(BR(Random)) and a new policy X that we're playing against can present opportunities for RL algorithms to fail to find an optimal solution. Given that policy-space is exceptionally large, and poorly understood, this can mean that we need dramatically different hyperparameters to compute an optimal BR against different points in the space. 4. For example, playing against the random policy is trivial and requires little exploration, whereas playing against a strong opponent could require extensive exploration. Exploration can be critical: in the Gathering/Harvest game, the "beam/laser" action that tags the other player out of the game can have extremely high value, but it is exceptionally hard to learn exactly when to use it (it requires the other agent to be directly in front of them, and the results of the action are not part of the agent's observation, as they're temporarily removed from the game and then respawn at a spawn point that is often not near the apples). The opponent policy, X, in our experiments is from a later iteration of PSRO, so we'd speculate that because the policy is "stronger," alternative hyperparameters would be more performant. 5. Zooming out a bit, we expect this problem troubles many works in MARL. As this problem is orthogonal to the point of this study, the stopgap we employ is to always favor the baseline methods when configuring hyperparameters. For example, we do not tune the planning methods; they use the hyperparameters tuned for the baseline. Therefore, the planner is equally susceptible to this problem, and the baseline is better positioned to be more performant---having been tuned. Frankly, we were surprised at how well the planner did in 3.2.2, which motivated a lot of the ancillary experiments and the cautions throughout the text about the magnitude of results. 6.
We have done our best to test and verify the correctness of our system, but concede it is possible that a software bug exists---especially since DRL systems can be exceptionally hard to test. As the planner and baseline *largely* overlap in the code they use, we would expect a bug to plague all methods (the planner only has additional services providing another data stream).
Rebuttal 1: Rebuttal: We thank the reviewers for their time and for providing thoughtful feedback on our work. We are delighted to hear you found the paper well written [Nexz, Lma6], the experimental analysis well designed [Lma6, WC2X], and the core of the work sound [Nexz, Lma6, WC2X]. We address reviewer-specific comments below and will incorporate the promised changes in the final version of the manuscript. The specific comments are supplemented by the attached PDF containing the following five figures (in order): 1. Comparison of world models by their cross-entropy loss [Nexz]. 2. Simple depiction of Figure~1 using bar graphs [s5k7]. 3. Simple depiction of the original world model comparison [s5k7]. 4. Extended training of the baseline learner in the decision-time planning experiment [Nexz]. 5. Analysis of concurrent background planning without decision-time planning [s5k7]. Pdf: /pdf/38b76b99062cecfa6ed6865048850a00995fa6cc.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
LightSpeed: Light and Fast Neural Light Fields on Mobile Devices
Accept (poster)
Summary: This paper describes a novel representation for learning view synthesis from a set of input images with known camera poses. They parameterize a classical two-slab 4D light field using a K-Planes representation (using 6 feature planes). Feature queries are processed through many layers of 1x1 convolutions before being post-processed by a super-resolution network to produce the output image. Similar to previous neural light field methods they augment the training data using virtual views rendered using a trained NeRF model. In experiments on standard benchmark datasets they achieve good rendering quality compared to previous methods capable of rendering on mobile devices, and also achieve a fast training time and compact representation. Strengths: The proposed approach of parameterizing a neural lightfield using the K-Planes concept is reasonable and novel to the best of my knowledge. They show convincingly that this approach leads to faster convergence and a more compact representation compared to MobileR2L. They also achieve high-quality rendering with fast rendering speeds even on mobile devices. In some cases they actually outperform the teacher model. They present an extensive evaluation using several benchmark datasets, including both synthetic and real data. They also include an ablation study to consider the effect of virtual views and decoder network size. They also compare implementations on different mobile processors in terms of latency. The presentation is clear and easily understandable. Weaknesses: The results on unbounded scenes especially are lower quality than state-of-the-art (non-real-time) methods such as Mip-NeRF 360. Artifacts such as blurriness and inconsistent shape are clearly evident in the result videos for unbounded scenes (but not for the object-centric scenes). These limitations deserve discussion. 
The approach of rendering a low-resolution image and then upsampling would seem to be a limiting factor in terms of rendering quality. It would be informative to see what quality is possible with this representation when directly rendering the full-resolution image (without the super-resolution network). Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: How do the results compare to state-of-the-art (regardless of rendering speed)? What is the effect of directly rendering a full-resolution image (rather than using the super-resolution network)? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: They do describe limitations but I think more discussion of the 360 unbounded results would strengthen the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
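For concreteness, the two-slab (light-slab) parameterization described in the summary maps each ray to the 4D coordinate (u, v, s, t) of its intersections with two parallel planes. A minimal sketch, assuming axis-aligned planes at depths z1 and z2 (the plane placement here is illustrative, not the paper's actual configuration):

```python
import numpy as np

def light_slab_coords(origin, direction, z1=0.0, z2=1.0):
    """Map a ray to 4D light-slab coordinates (u, v, s, t): the (x, y)
    intersections with the two parallel planes z = z1 and z = z2."""
    d = direction / np.linalg.norm(direction)
    if abs(d[2]) < 1e-9:
        raise ValueError("ray is parallel to the slab planes")
    p1 = origin + ((z1 - origin[2]) / d[2]) * d  # hit point on plane P1
    p2 = origin + ((z2 - origin[2]) / d[2]) * d  # hit point on plane P2
    return np.array([p1[0], p1[1], p2[0], p2[1]])

# A ray from (1, 2, -1) straight along +z pierces both planes at x=1, y=2.
print(light_slab_coords(np.array([1.0, 2.0, -1.0]), np.array([0.0, 0.0, 1.0])))
```

The resulting 4-vector is what a grid or MLP would consume as the per-ray input, which is why a single lookup per pixel suffices for light-field rendering.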
Rebuttal 1: Rebuttal: **We thank the reviewer for their positive comments and insightful feedback. We appreciate that the reviewer acknowledges our approach to be novel with fast and high-quality renderings. We further note the reviewer finds our evaluations extensive and the paper's presentation clear and easily understandable. In the following, we address the feedback and questions presented by the reviewer.** *** **Q1. Regressing the full-resolution image directly.** We thank the reviewer for raising an insightful point! In our preliminary experiments, we found a *slight* increase in visual fidelity when regressing the full-resolution image directly. However, since rendering full-resolution images directly is not feasible on mobile devices in real-time (as pointed out in MobileR2L [a]) and the visual fidelity gains are marginal, we did not pursue this direction of experiments. *** **Q2. About more comparisons.** Our method is tailored specifically for mobile devices, and achieves state-of-the-art rendering fidelity on both LLFF and Synthetic $360^\circ$ scenes compared to prior methods in this domain (Tab. G below). Given the use of teacher NeRF methods to generate pseudo-data for light field training, we can potentially improve the rendering fidelity of LightSpeed by leveraging newer NeRF-based methods. >Table G: **Quantitative Comparison** on Forward Facing and Synthetic $360^\circ$ scenes. >| Method | Synthetic $360^\circ$ PSNR $\uparrow$| LLFF PSNR $\uparrow$ | >| :------------ | :-----------: | :-----------: | >| NeRF | 31.01 |26.50| >| NeRF-PyTorch | 30.92 |26.26| >| SNeRG | 30.38 |25.63| >| MobileNeRF | 30.90 |25.91| >| MobileR2L | 31.34 |26.15| >| LightSpeed (Ours) | **32.23** | **26.50**| *** **References:** [a] Cao, Junli, et al. "Real-Time Neural Light Field on Mobile Devices." CVPR. 2023. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for their responses. I have read over the other reviews and the authors' responses.
I think they have sufficiently addressed the concerns raised in the reviews and I will maintain my recommendation of acceptance. --- Reply to Comment 1.1.1: Comment: Thank you so much for checking our response and other reviews as well. We appreciate your time and reviewing efforts. If you have any other questions or concerns, let us know and we will make the best of our efforts to resolve them within the open discussion period. Best, Authors
Summary: This paper presents LightSpeed, a method for real-time rendering on mobile devices. The approach involves replacing the commonly used Plücker coordinates with a light-slab representation and implementing multi-level grids similar to Instant-NGP and K-Planes. Additionally, it introduces a divide-and-conquer strategy to address the light slab's inability to represent 360-degree objects effectively. Strengths: The proposed approach effectively improves efficiency with several interesting designs. Firstly, it uses the light slab to parameterize rays, which is more efficient than the commonly used Plücker coordinates. Besides, the practice of compressing the 4D space through six sub-planes can also improve efficiency. Finally, the divide-and-conquer strategy solves the problem that the traditional light-slab method fails to represent 360-degree objects. Weaknesses: 1. In the abstract and introduction sections, the authors repeatedly claim that existing methods overlook the light-slab parameterization. However, research has already been conducted on novel view synthesis based on light-slab parameterization. Besides Attal et al. mentioned in line 69, there are other related works that are ignored, such as: NeuLF: Efficient Novel View Synthesis with Neural 4D Light Field; NeLF: Practical Novel View Synthesis with Neural Light Field; Progressively-connected Light Field Network for Efficient View Synthesis. All these papers are based on the light-slab parameterization and its variants, and thus should be carefully discussed or compared with. 2. Some claims are hard to understand. For example, lines 51-52 state that current ray parameterizations will encounter issues with the introduction of grid-based representations. However, no specific problem is identified, and the subsequent description only explains that the existing method is "redundant."
Also, in line 57, "the high-dimensional stratified-point representation is not feasible for grid-based discretization.", it is unclear why it is not feasible. In my understanding, NeRF is also a neural representation that needs sampling, and it can use a grid-based representation. 3. In the ablation section, the authors only ablate the data requirements and the decoder network size, which is not sufficient. More in-depth discussion is needed to demonstrate the effectiveness of each part of the proposed method. For example, it would be great if the authors could compare the grid-based approach and the pure-MLP approach within their own framework, such as replacing the Ray-space Grid Encoder with a traditional frequency encoder. Please carefully check the claimed contributions and ensure that all of them are supported by the experiments. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Please address my questions in the weakness section. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
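The "traditional frequency encoder" suggested as an ablation baseline above is typically a NeRF-style positional encoding. A minimal sketch, assuming a 4D light-slab input; the frequency count is illustrative:

```python
import numpy as np

def positional_encoding(x, num_freqs=6):
    """NeRF-style frequency encoding: each coordinate c becomes
    [sin(2^k * pi * c), cos(2^k * pi * c)] for k = 0 .. num_freqs-1."""
    x = np.asarray(x, dtype=np.float64)
    angles = x[..., None] * (2.0 ** np.arange(num_freqs)) * np.pi
    enc = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return enc.reshape(*x.shape[:-1], -1)

ray = np.array([0.1, 0.2, 0.3, 0.4])  # a 4D light-slab coordinate
print(positional_encoding(ray).shape)  # (48,): 4 dims * 2 * 6 freqs
```

An MLP fed this encoding is the "pure MLP" variant the reviewer asks for; a grid encoder instead looks the ray coordinate up in a learned feature table.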
Rebuttal 1: Rebuttal: **We thank the reviewer for their positive review and feedback. We appreciate that the reviewer acknowledges the efficiency of our method and interesting design choices of using a light-slab ray parameterization with 4D grid compression via decomposition. We address the feedback provided by the reviewer in the following.** *** **Q1. About the compatibility of other ray parameterizations with grid-based representations, L51-52, 57.** **(a). Plücker representation.** The Plücker representation lies in the projective 5D space, presenting challenges for discretization and grid-based learning. Even if we ignore the projective nature, discretization results in a 5D ray space which (in both original and decomposed form) has a bigger storage cost than its light-slab counterpart. Given that the target devices are mobile, storage must be as limited as possible. **(b). Stratified ray representation.** Further, the stratified ray representation used by R2L [a] and MobileR2L [b] can potentially be discretized in two ways: (1) defining three new dimensions for every point sampled along the ray, which results in an extremely overparameterized and *redundant* ray space with unusable storage costs, and (2) querying a spatial 3D grid for points sampled along the ray. Multiple queries of the 3D grid per ray, as done by NeRF-based counterparts, *increase run-time per pixel*, prohibiting real-time inference on mobile devices. Alternatively, the light-slab ray-space grid is compact and allows a *single grid query per ray/pixel*, conducive to real-time inference. Hence, we find the light-slab parameterization to be the most effective, with the others presenting issues for grid-based learning. *** **Q2. Ablation on ray-space grid encoding.** We provide an ablation in Tab. E below on how the proposed Ray-Space Grid Encoder helps as compared to just using the light-slab representation with a traditional frequency encoder.
For the purpose of this ablation, we train LightSpeed with grid and frequency encoders for 200k iterations with different network sizes and compare results on a full-resolution 800x800 Lego sub-scene from the Synthetic $360^\circ$ dataset. Further, we show the training dynamics for all the trained variants in Fig. 2 of the rebuttal PDF (red and green plots). As claimed, our approach offers better visual fidelity and training dynamics (iterations to reach a target PSNR) for both computationally cheaper small networks as well as full-sized networks. >Table E: **Effect of using a Grid Encoder**: We demonstrate the effect of using a grid-based LightSpeed by comparing with a frequency encoded variant (no grid). L and W refer to network depth and width respectively. >| Method | PSNR $\uparrow$| >| :------------ | :-----------: | >| 15-L W-256 LS (PE) | 28.84 | >| 30-L W-256 LS (PE) | 30.63 | >| 60-L W-256 LS (PE) | 32.16 | >| 15-L W-256 LS (Grid) |30.37 | >| 30-L W-256 LS (Grid) |31.70 | >| 60-L W-256 LS (Grid) | 32.34| *** **Q3. Full-resolution ablation.** We show the visual fidelity and on-device latency tradeoff at *full resolution* in Tab. F below. We also report FLOP values as an indicator of computational resources required at run-time. LightSpeed maintains a significantly better tradeoff than MobileR2L on full-resolution scenes as well. >Table F: **Full-Resolution Fidelity-Latency Tradeoff**: LightSpeed (LS) maintains a much better tradeoff than MobileR2L (MR2L). Benchmarking done on an iPhone 13 with full-resolution images. L is network depth, and W is network width.
>| Method | PSNR $\uparrow$| Latency $\downarrow$| FLOPs $\downarrow$| >| :------------ | :-----------: | :-----------: | :-----------: | >| 15-L W-256 MR2L | 27.69 | 14.54 ms | 12626M | >| 30-L W-128 MR2L | 27.54 | 14.47 ms | 8950M | >| 30-L W-256 MR2L | 29.21 | 18.59 ms | 23112M | >| 60-L W-256 MR2L | 30.34 | 22.65 ms | 42772M | >| 15-L W-256 LS | 30.37 | 14.94 ms | 12833M | >| 30-L W-128 LS | 30.13 | 14.86 ms | 9065M | >| 30-L W-256 LS | 31.70 | 20.35 ms | 23319M | >| 60-L W-256 LS | 32.34 | 26.47 ms | 42980M | *** **References:** [a] Wang, Huan, et al. "R2L: Distilling neural radiance field to neural light field for efficient novel view synthesis." ECCV. 2022. [b] Cao, Junli, et al. "Real-Time Neural Light Field on Mobile Devices." CVPR. 2023. --- Rebuttal 2: Comment: Dear Reviewer 8eAH, We sincerely thank you again for your thoughtful suggestions and valuable feedback to improve our work. We provide additional explanations to help clarify our work. As the deadline for open discussion is soon, we sincerely hope to use this opportunity to see if our responses are sufficient and if any concern remains. It would be our great pleasure if you would consider updating your review or score. Thanks again for your time. Best, Authors --- Rebuttal Comment 2.1: Comment: Thanks for providing the response! All my concerns are addressed, and thus I would like to raise my rating to "weak accept". --- Reply to Comment 2.1.1: Comment: Dear Reviewer 8eAH, Thank you so much for checking our responses and raising the score. It is our great pleasure to know our efforts have helped address your concerns! We appreciate your time and reviewing efforts to help improve our work. If you still have questions or concerns, we would sincerely like to know and will make the best of our efforts to resolve them within the open discussion period. Best, Authors
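For reference, the Plücker parameterization discussed in the rebuttal maps a ray to a 6-vector (d, o × d) that is defined only up to scale, which is why the rebuttal describes it as living in a projective 5D space. A minimal sketch:

```python
import numpy as np

def plucker_coords(origin, direction):
    """6D Plücker coordinates (d, m): unit direction d plus the moment
    m = origin x d. The pair is defined only up to scale, hence the ray
    space is projective (effectively 5D)."""
    d = direction / np.linalg.norm(direction)
    return np.concatenate([d, np.cross(origin, d)])

p = plucker_coords(np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]))
print(p)  # direction (0, 0, 1), moment (0, -1, 0); note d . m = 0
```

Discretizing this 5D space on a grid costs more storage than the 4D light-slab coordinate, which is the tradeoff the rebuttal argues motivates the light-slab choice.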
Summary: This paper introduces LightSpeed, which uses traditional 4D light-slab representation and merges the super-resolution network proposed by MobileR2L. LightSpeed uses the NeLF method and will be primarily implemented on mobile. Strengths: Originality: Utilize the overlooked method of 4D light-slab representation and merge the good method of saving memory and time from other NeLF methods (The super-resolution network proposed by MobileR2L). Extend the application scenes of light-slab representation to non-frontal scenes using the divide-and-conquer strategy. Quality: Greatly saves the storage compared with other methods and balances the reconstruction quality and storage well. Clarity: Clearly show the results and the advantages of LightSpeed. Significance: Advance the general application of the NeLF method on mobile. Weaknesses: Since LightSpeed was proposed to solve the reconstruction problem in mobile, this paper does not show enough data for different mobile platforms. This paper will be more convincing with more data in different chips or multi-platform. This paper only shows the examples of unbounded datasets in Fig.4, and more comparisons of other datasets should be shown to be more convincing. The data demonstrated now keeps me in doubt about the effects. Table 3 should be placed in section 4.2 rather than 4.3. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: I think the storage problem is not the most significant in the rendering problem in mobile. The mobile device can upload data to the cloud to solve this problem. RealityScan adopts this method and can obtain similar results based on NeRF method. So what is the most significant strength of LightSpeed for a user who can use cloud storage? Moreover, since MobileNeRF can achieve real-time manipulation, can LightSpeed achieve similar effects? Besides, demonstrating more data will make this paper more convincing, like experimenting on more different chips. 
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: please see weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **We thank the reviewer for their positive feedback and valuable suggestions. We appreciate that the reviewer finds our work original, explained clearly and of significance towards light field methods for mobile devices. We address concerns via more visual and on-device results and hope our response demonstrates the strengths of our approach.** *** **Q1. Latency on different chipsets.** We compare the latency of our approach with MobileR2L on 6 different chipsets as shown in Tabs. D-1 and D-2 below. We obtain competitive latency numbers for our full-sized LightSpeed network, and much better latency for our 30-layered network (on all devices), which has better rendering fidelity than full-sized MobileR2L as shown in Tab. D-3 below. > Table D-1: **Rendering Latency Analysis on LLFF Scenes**: LightSpeed maintains a competitive rendering latency (ms) to prior works. >| Chip | MobileR2L| Ours | Ours (30-L) | >| :------------ | :-----------: | :-----------: | :-----------: | >| Apple A13 (Low-end) | 40.23 | 41.06 | 32.29| >| Apple A15 (Low-end) | 18.04 | 19.05 | 15.28| >| Apple A15 (High-end) | 16.48 | 17.68 | 15.03| >| Apple A16 | 17.84 | 18.15 | 14.83 | >| Apple M1 Pro | 17.65 | 17.08 | 13.86 | >| Snapdragon SM8450 | 39.14 | 45.65 | 32.89| >Table D-2: **Rendering Latency Analysis on Synthetic $360^\circ$ Scenes**: LightSpeed maintains a competitive rendering latency (ms) to prior works. >| Chip | MobileR2L| Ours | Ours (30-L) | >| :------------ | :-----------: | :-----------: | :-----------: | >| Apple A13 (Low-end) | 65.54 | 66.10 | 53.89| >| Apple A15 (Low-end) | 26.21 | 27.10| 20.15 | >| Apple A15 (High-end) | 22.65 | 26.47 | 20.35| >| Apple A16 | 25.98 | 26.44 | 20.46 | >| Apple M1 Pro | 27.37 | 27.14 | 20.13 | >| Snapdragon SM8450 | 40.86 | 41.26 | 33.87| >Table D-3: **Full-Resolution Fidelity-Latency Tradeoff**: LightSpeed (LS) maintains a much better tradeoff than MobileR2L (MR2L). Benchmarking done on an iPhone 13 with full-resolution images.
L is network depth, and W is network width. >| Method | PSNR $\uparrow$| Latency $\downarrow$| FLOPs $\downarrow$| >| :------------ | :-----------: | :-----------: | :-----------: | >| 15-L W-256 MR2L | 27.69 | 14.54 ms | 12626M | >| 30-L W-128 MR2L | 27.54 | 14.47 ms | 8950M | >| 30-L W-256 MR2L | 29.21 | 18.59 ms | 23112M | >| 60-L W-256 MR2L | 30.34 | 22.65 ms | 42772M | >| 15-L W-256 LS | 30.37 | 14.94 ms | 12833M | >| 30-L W-128 LS | 30.13 | 14.86 ms | 9065M | >| 30-L W-256 LS | 31.70 | 20.35 ms | 23319M | >| 60-L W-256 LS | 32.34 | 26.47 ms | 42980M | *** **Q2. Visual results.** We show all video results in the supplementary material and comparison results on 4 different scenes in Fig. 4 and Fig. 1 of the main paper and supplementary material respectively. We further share results on more scenes in Fig. 1 of the rebuttal PDF to strengthen our claims. As claimed, LightSpeed is able to capture fine-level details better than the previous state-of-the-art MobileR2L [a]. Competitive on-device runtimes and a significantly better visual fidelity-latency tradeoff further demonstrate the strengths of our work. *** **Q3. About the placement of the table.** Thank you for pointing this out. We will make sure Table 3 is placed in Sec. 4.2 in the revised paper. *** **Q4. Using cloud storage.** Uploading LightSpeed models to the cloud would add additional latency to the test-time view-synthesis process, defeating the purpose of using a real-time rendering framework like LightSpeed. To use cloud storage for offloading models from mobile devices, we would still want the models to be small enough for fast on-loading and off-loading (bigger models would introduce more overhead). Additionally, privacy concerns and monetary costs also arise if models are stored in the cloud. *** **Q5. Real-time scene manipulation.** Since light fields inherently do not model the scene geometry, scene manipulation is not currently possible with LightSpeed.
There are possibilities for composing two light fields for scene manipulation, and we plan to explore this in future work. MobileNeRF uses explicit scene representation in the form of a mesh and hence can manipulate scenes. However, MobileNeRF does not run on mobile devices for all scenes: it runs out of memory for complex scenes and hence presents a crucial drawback by not supporting different kinds of scenes. On the contrary, LightSpeed does not suffer from any such drawbacks and can easily handle complex scenes as well within on-device computational limits. *** **References:** [a] Cao, Junli, et al. "Real-Time Neural Light Field on Mobile Devices." CVPR. 2023. --- Rebuttal Comment 1.1: Comment: Thanks for addressing our concerns. We will keep our original rating for this paper.
Summary: Real-time novel-view image synthesis on mobile devices is challenging due to limited computational power and storage. Volumetric rendering methods are unsuitable due to their high computational cost. The authors propose using the efficient light slab representation for learning a neural light field, which achieves better rendering quality and a favorable trade-off between quality and speed compared to prior light field methods. Strengths: The paper proposes to use the light slab representation for learning a neural light field, which has not been used significantly in the literature before. The proposed method using light slab presentation is shown to perform better than the SOTA while providing a computational advantage over the existing methods. Weaknesses: NA Technical Quality: 3 good Clarity: 3 good Questions for Authors: N/A Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **We thank the reviewer for their feedback and strong rating. As summarized in the review, we propose a novel real-time view-synthesis method that is based on light fields. We leverage the previously overlooked 4D light-slab representation for easy discretization and grid-based representations for neural light field learning. Our grid-based neural light field obtains a significant boost in training speed and performs better than existing works. Our approach further provides a computational advantage over existing methods in the form of an excellent tradeoff between rendering fidelity and on-device latency, paving the way for easy deployment to mobile devices.**
Rebuttal 1: Rebuttal: We sincerely thank all the reviewers for their thoughtful comments and appreciate their findings that our novel method simplifies real-time novel view synthesis on mobile devices while performing better than existing works with high-quality and fast rendering even on mobile devices (bQpx, XtjZ, CSSm, BeXQ). Specifically, our method is `noteworthy` in using a `traditionally underutilized light-slab representation` (bQpx) to formulate a grid-based ray-space for light field learning. This yields a `more compact and efficient ray representation` (bQpx) and replaces `the commonly used Plücker coordinates` (8eAH). We also `compress the ray space using 2D planes for efficiency` (8eAH). Our method further overcomes the limitations of the light-slab representation by extending it to non-frontal $360^\circ$ scenes using a divide-and-conquer strategy (bQpx, BeXQ, 8eAH). Our study provides `an extensive evaluation` (CSSm), demonstrating `clear results and advantages` (BeXQ) over prior methods. Further, our paper has a `clear and easily understandable presentation` (CSSm). We will incorporate all suggestions and sincerely hope this will help reviewers finalize their judgements. *** We address common questions from reviewers in the following and answer other questions in the individual responses. We also upload the author response PDF to include more comparison figures. **Q1. To bQpx, 8eAH. About comparison with SIGNET, NeuLF, and ProLiF.** We thank the reviewers for pointing out these works. We propose a novel method to utilize grid-based representations for light field learning. To this end, we find the light-slab representation to be compact and easy to discretize for learning a grid-based ray space. To limit storage requirements, we decompose the 4D ray-space into six 2D planes. Further, we propose a divide-and-conquer strategy to utilize the light-slab representation, originally designed for frontal scenes, for modelling non-frontal scenes.
While SIGNET, NeuLF, and ProLiF all use the light-slab representation, they differ significantly from LightSpeed. - SIGNET [a] explores a different problem of compressing a given light field using ultra-spherical network input encoding, with no guarantee of photo-realistic view synthesis, while our method targets and *achieves* real-time photorealistic view synthesis, specifically on mobile devices. - NeuLF [b] fails to capture fine-level details of scenes due to the absence of any network input encoding and takes 70ms to render a 567x1008 frame on an RTX 3090. On the other hand, LightSpeed has no issues *learning fine-level details*, as shown in Fig. 4 in the main paper, and *runs in real-time* on mobile devices. - ProLiF [c] uses volumetric rendering to generate pixel values, prohibiting real-time inference speeds. In contrast, LightSpeed runs on mobile devices in *real-time*. We will discuss these works in detail in the revised paper. *** **Q2. To bQpx, CSSm. About results on unbounded scenes.** The rendering fidelity of our approach (LightSpeed) is closely tied to the performance of the corresponding NeRF teacher. LightSpeed uses Instant NGP [d] teachers for both bounded and unbounded $360^\circ$ scenes to maintain experimental consistency. We would like to highlight that Instant NGP introduces the artifacts in unbounded scenes, which are carried forward to LightSpeed via the mined pseudo-data. We share some of the pseudo-data images from Instant-NGP in Fig. 3 of the rebuttal PDF. MipNeRF360 [e] specifically uses space contraction techniques to model the unbounded nature of the scene and deal with blurriness in the renderings. It further introduces a distortion-based regularizer to remove floater artifacts and prevent background collapse. The techniques introduced by MipNeRF360 tackle the same type of artifacts pointed out in the reviews.
Hence, using MipNeRF360 teachers will mitigate both these issues and could boost the visual fidelity on unbounded scenes for LightSpeed. *** **References:** [a] Feng, Brandon Yushan, and Amitabh Varshney. "Signet: Efficient neural representation for light fields." ICCV. 2021. [b] Li, Zhong, et al. "Neulf: Efficient novel view synthesis with neural 4d light field." arXiv. 2021. [c] Wang, Peng, et al. "Progressively-connected Light Field Network for Efficient View Synthesis." arXiv. 2022. [d] Müller, Thomas, et al. "Instant neural graphics primitives with a multiresolution hash encoding." ToG. 2022. [e] Barron, Jonathan T., et al. "Mip-nerf 360: Unbounded anti-aliased neural radiance fields." CVPR. 2022. Pdf: /pdf/aef0ce1ceb692e49ec787f2a37e71f68b4c06c5e.pdf
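The six-plane decomposition described in Q1 of the global rebuttal can be sketched as follows. This is a simplified, K-Planes-style illustration with nearest-neighbour lookup and random features; the actual method would use learned features and interpolation:

```python
import numpy as np

def sixplane_query(coord, planes, res):
    """Query six 2D feature planes, one per coordinate pair of a 4D
    light-slab ray (u, v, s, t), and fuse the features with an
    element-wise product. Nearest-neighbour lookup keeps the sketch simple."""
    pairs = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
    idx = np.clip(np.round(coord * (res - 1)).astype(int), 0, res - 1)
    feat = np.ones(planes[0].shape[-1])
    for plane, (a, b) in zip(planes, pairs):
        feat = feat * plane[idx[a], idx[b]]  # fuse the (F,) feature vectors
    return feat

res, F = 32, 8
rng = np.random.default_rng(0)
planes = [rng.random((res, res, F)) for _ in range(6)]
ray = np.array([0.2, 0.5, 0.7, 0.9])  # normalized (u, v, s, t)
print(sixplane_query(ray, planes, res).shape)  # (8,)
```

Storing six res x res planes instead of one res^4 grid is what limits the storage cost, which the rebuttal highlights as essential for mobile deployment.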
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper introduces LightSpeed, a method aimed at simplifying real-time novel-view image synthesis on mobile devices, which typically face constraints related to computational power and storage. By adopting the traditionally underutilized 4D light-slab (two-plane) representation for learning a neural light field, LightSpeed offers a more compact and efficient ray representation. While the light-slab representation has its limitations, mainly being designed for frontal views, the paper demonstrates a way to extend it for non-frontal scenes using a divide-and-conquer strategy. Strengths: - The paper proposes a promising approach that integrates neural light field representation with grid representation. Weaknesses: - The Contribution Context: While the methodology primarily integrates existing techniques like the neural light field method and grid representation from k-plane[9] and tensorf[5], it lacks a direct comparison or detailed discussion on the light-slab representation versus the Plücker coordinate representation[26] for non-frontal scenes. Given that the Plücker coordinate representation might achieve similar results, an experimental comparison would provide a more definitive understanding of the actual performance improvements, if any, achieved by their proposed method. Although such an integrated approach has its merits, the overall performance improvement appears to be marginal without these comparative analyses. - Coverage of Related Work: The paper mentions another neural light field method using light-slab representation. However, other related and potentially influential works such as "Neulf: Efficient novel view synthesis with neural 4D light field, EGSR 2022" and "Signet: Efficient neural representation for light fields, ICCV 2021" have been overlooked. - Limited Results: The experimental results presented have some limitations. 
Notably, the occurrence of the 'jelly effect', particularly in unbounded scenes, implies that the proposed method could benefit from further optimization. Furthermore, the synthetic scene experiments were conducted at a resolution of 400 x 400, as highlighted in Figure 1 and the ablation study. It remains uncertain how the proposed method would perform at the dataset's original resolution of 800 x 800. This higher-resolution evaluation could provide a more thorough understanding of the method's capabilities. - Feasibility for Real-Time Rendering: The paper claims real-time rendering feasibility on mobile devices, but it does not provide sufficient evidence such as measured MACs or a real-time rendering video demonstration. The computational cost of the 30-layered decoder plus 6-plane feature query might prove too expensive for the intended mobile applications. Further investigations should be conducted to substantiate these claims. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: -In line 175, it is unclear how the authors decided on the locations for the two planes, P1 and P2. Could the authors provide clarification on their choice of plane locations? -It is also not mentioned whether positional encoding was utilized before feeding data into the network. Could the authors specify if this step was incorporated in their method? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: The paper presents a noteworthy approach by merging neural light field representation with grid representation. However, it primarily rehashes existing methods, with limited novel contribution. Experimental results exhibit limitations, and concerns about the feasibility of real-time rendering on mobile devices persist. 
Therefore, considering these constraints, I suggest a borderline reject for this paper in its current state. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **We thank the reviewer for the valuable feedback. We appreciate that the reviewer finds our approach of integrating light fields with grid-based representations noteworthy and promising. We address the concerns in the following. We hope our response further demonstrates the strengths and real-time feasibility of our method.** *** **Q1. About issues for Plücker representation discretization.** The Plücker representation lies in the projective 5D space, presenting challenges for discretization and grid-based learning. Even if we ignore the projective nature, discretization results in a 5D ray space which (in both original and decomposed form) has a larger storage cost than its light-slab counterpart. Given that the target devices are mobile, storage should be kept as small as possible. *** **Q2. Comparison with Plücker representation.** Given the challenges of discretizing the Plücker representation, we compare positionally encoded Plücker coordinates against our grid-based light-slab approach in Tab. A below for different network sizes to demonstrate the effectiveness of our approach. We train all models for 200k iterations on one Lego sub-scene at the *full 800x800 resolution*. We also share training curves for the variants in question in Fig. 2 of the rebuttal PDF (red and blue curves). As claimed, our integrated approach performs better in terms of training time and test-time visual fidelity for large and small models (with lower computational cost) alike, whereas the Plücker-based network shows a sharp decline in visual fidelity and increased training times to reach a target test PSNR as network size is reduced.

>Table A: **Light-Slab Grid Representation vs. Plücker Coordinates:** We compare the light-slab based LightSpeed (LS) with a positionally encoded variant of the Plücker ray representation.
>| Method | PSNR $\uparrow$ |
>| :------------ | :-----------: |
>| 15-L W-256 Plücker | 28.65 |
>| 30-L W-256 Plücker | 30.84 |
>| 60-L W-256 Plücker | 32.14 |
>| 15-L W-256 LS | 30.37 |
>| 30-L W-256 LS | 31.70 |
>| 60-L W-256 LS | 32.34 |

*** **Q3. Full-resolution ablation.** Our evaluations in Tab. 1 of the main paper (cropped version as Tab. B below) are conducted at *full resolution*. We further show the tradeoff between visual fidelity and on-device latency at full resolution in Tab. C below. LightSpeed maintains a significantly better tradeoff than MobileR2L on full-resolution scenes as well.

>Table B: **Quantitative Comparison** on Forward Facing and Synthetic $360^\circ$ scenes.

>| Method | Synthetic $360^\circ$ PSNR $\uparrow$ | LLFF PSNR $\uparrow$ |
>| :------------ | :-----------: | :-----------: |
>| NeRF | 31.01 | 26.50 |
>| NeRF-PyTorch | 30.92 | 26.26 |
>| SNeRG | 30.38 | 25.63 |
>| MobileNeRF | 30.90 | 25.91 |
>| MobileR2L | 31.34 | 26.15 |
>| LightSpeed (Ours) | **32.23** | **26.50** |

>Table C: **Full-Resolution Fidelity-Latency Tradeoff**: LightSpeed (LS) maintains a much better tradeoff than MobileR2L (MR2L). Benchmarking done on an iPhone 13 with full-resolution images. L is network depth, and W is network width.

>| Method | PSNR $\uparrow$ | Latency $\downarrow$ | FLOPs $\downarrow$ |
>| :------------ | :-----------: | :-----------: | :-----------: |
>| 15-L W-256 MR2L | 27.69 | 14.54 ms | 12626M |
>| 30-L W-128 MR2L | 27.54 | 14.47 ms | 8950M |
>| 30-L W-256 MR2L | 29.21 | 18.59 ms | 23112M |
>| 60-L W-256 MR2L | 30.34 | 22.65 ms | 42772M |
>| 15-L W-256 LS | 30.37 | 14.94 ms | 12833M |
>| 30-L W-128 LS | 30.13 | 14.86 ms | 9065M |
>| 30-L W-256 LS | 31.70 | 20.35 ms | 23319M |
>| 60-L W-256 LS | 32.34 | 26.47 ms | 42980M |

*** **Q4. On-device feasibility.** We report the FLOPs for our method and MobileR2L in Tab. C.
We demonstrate *real-time latency numbers* obtained from benchmarking the LightSpeed framework *directly on mobile devices* using Apple's Xcode. A real-time demo will be released with the code later. Specifically, a 30-layered decoder plus a six-plane feature query requires almost half the operations of the full-sized network from MobileR2L, both of which run on mobile devices. *** **Q5. Location for planes P1 and P2.** We use the same NDC trick as leveraged by NeRF to project rays to the NDC space: project the scene's near plane to z = -1 and the plane at infinity to z = 1. We use these projected planes (z = ±1) as planes P1 and P2. *** **Q6. Use of positional encoding.** *No positional encoding* is utilized before feeding the grid encodings to the decoder network. Grid-based representations and positional encodings offer alternative ways to provide interpolation capabilities to the network, and hence using one of them is sufficient. --- Rebuttal Comment 1.1: Comment: Thank you to the authors for the detailed rebuttal and additional experiments provided. The response has indeed addressed some of my initial concerns, though I still have noteworthy reservations about the paper. - **Light Slab Representation**: As also highlighted by Reviewer 8eAH, this parametrization has been explored in several pieces of literature. Please ensure you verify if the references are from arXiv preprints or are already published. A revision of the claims and a detailed discussion comparing to these methods is necessary. - **Regarding the Plucker representation**: The authors have responded by suggesting that a 5D space occupies more space than a 4D one. However, in the context of a 360-degree scene, the 4D representation would necessitate multiple light slab representations, which could also be space-consuming.
While I appreciate the inclusion of a table that quantitatively compares the light slab representation, demonstrating its superiority, I feel that the advantage might be marginal. - **Concerning the real-time mobile rendering claim**: The title **"lightSpeed"** suggests an extremely efficient and low power consumption solution. However, according to the rebuttal, the configuration "60-L W-256 LS", which surpasses the quality of mobileR2L, achieves just under 40 fps on the iPhone 13 with an A15 chip. This might be even more challenging for Android devices with lesser computational capabilities. Moreover, the authors have mentioned providing a real-time demo only post the code release. Given the bold "lightSpeed" claim and considering NeurIPS's stature, I believe a video demo should be available for review prior to the review process conclusion. Could the authors provide a live demonstration video through an anonymous link? In conclusion, I believe the paper presents an interesting contribution. However, I'd urge the authors to revisit some of their claims and experimental evidence. My reservations, especially concerning the real-time rendering on mobile devices, remain. I am open to reconsidering my evaluation if further convincing evidence is presented. --- Rebuttal 2: Comment: Dear Reviewer bQpx, We sincerely thank you again for your thoughtful suggestions and valuable feedback to improve our work. We provide additional explanations to help clarify our work. As the deadline for open discussion is soon, we sincerely hope to use this opportunity to see if our responses are sufficient and if any concern remains. It would be our great pleasure if you would consider updating your review or score. Thanks again for your time. Best, Authors --- Rebuttal 3: Comment: Thank you for taking the time to check our response. We try our best to clarify the reservations you may have about the paper. *** **Q1. 
Light Slab Representation** We provide a detailed discussion of the related works pointed out by the reviewer and Reviewer `8eAH` in Q1 of the common response to all the reviews (`Author Rebuttal by Authors`). We will ensure that a discussion is added to the revised paper along with proper references. We humbly think we have addressed the concerns of Reviewer `8eAH` in this regard. *** **Q2. Regarding the Plucker representation** We appreciate the reviewer's questions about the efficiency of a grid-based discretized Plucker method over our light-slab based method. We experimentally try to discretize the 5D Plucker space and model the ray space using $\binom{5}{2}$ 2D feature grids within our framework. All our efforts towards this approach fail. The main issue we encounter is that removing the projective ambiguity requires that we fix one of the coordinates to a constant value. However, we encounter *degenerate cases* where the coordinate to be normalized becomes 0. The best performance with this approach is compared with our LightSpeed model in Tab. A. These experiments are conducted for 200k iterations on a Lego scene. The grid-based Plucker representation isn't able to learn anything compared to our method.

> Table A: **Grid-based Plucker Representation**: Plucker-grid representation fails to learn the scene compared to our method.

>| Method | PSNR $\uparrow$ |
>| :------------ | :-----------: |
>| Plucker-Grid | 13.36 |
>| Ours | 31.20 |

We hypothesize that this poor performance stems from the projective nature of the Plucker representation, which *hinders* the effective discretization of the corresponding ray space and hence grid-based learning. To our knowledge, we are unaware of any works that offer an efficient way to discretize a projective space. On the contrary, the light-slab representation is compact and offers an easy discretization of the *Euclidean* ray space, enabling grid learning. *** **Q3.
Concerning the real-time mobile rendering claim** (a) We would like to draw attention to the fact that all our latency numbers are actually computed on mobile devices themselves (Tab. 3, main paper), leaving *no room for infeasibility on mobile devices*. (b) We agree with the reviewer that our `60-L W-256 LS` achieves 40 FPS on an iPhone 13, and this might be challenging to run on devices with lesser computational capabilities. We would kindly like to point out that this is a competitive FPS compared to prior works. Further, this is exactly where our method comes into play with our `30-L W-256 LS` variant in the rebuttal (which *also surpasses the MobileR2L visual fidelity*), delivering *~50 FPS* with *almost half the FLOPs* of both `60-L W-256 LS` and `60-L W-256 MR2L`. Additionally, an even lighter variant, `15-L W-256 LS` in the rebuttal, with ~3.3x fewer FLOPs than full-sized models, delivers visual fidelity similar to that of the full-sized `60-L W-256 MR2L`. (c) To numerically support the claim of our method's real-time performance on Android phones, we show the *on-device latency* on the **Snapdragon SM8450 chip** used in various **Android devices**, including **Huawei Honor Magic 4, OnePlus 10 Pro, Oppo Find X5 Pro, vivo iQOO 9, and Xiaomi 12**. We obtain competitive latency numbers (Tab. B) for our full-sized LightSpeed network and *much lower latency for our 30-layered network*, which has better rendering fidelity than full-sized MobileR2L, as shown in Tab. C of the original rebuttal.

> Table B: **Rendering Latency Analysis on various scenes**: LightSpeed maintains a competitive rendering latency (ms) to MobileR2L on the Snapdragon SM8450 chip.
>| Scenes | MobileR2L | Ours | Ours (30-L) |
>| :------------ | :-----------: | :-----------: | :-----------: |
>| LLFF | 39.14 | 45.65 | 32.89 |
>| Synthetic $360^\circ$ | 40.86 | 41.26 | 33.87 |

(d) We are building a mobile application following MobileR2L for a real-time demonstration; this requires substantial software engineering effort and time (please kindly note that the code for the application from MobileR2L has not been released). We would be happy to share a real-time demonstration video; however, the application is not ready right now and, given the *limited time remaining* in the discussion period, it is not feasible to build the on-device application on such short notice. We are certain that the latency and FPS numbers we provide reflect the on-device performance of the method (we could provide CoreML benchmark reports for different chips via the anonymous link if the reviewer prefers). That being said, we would like to assure the reviewer that we will release a full-fledged demo once the mobile application is ready. *** We sincerely hope that our response answers your questions. It would be our great pleasure if you would consider updating your review or score. Best, Authors --- Rebuttal Comment 3.1: Comment: I want to express my gratitude to the authors for their thorough response to the concerns I highlighted. I am confident that the forthcoming revisions will address the areas of related work and representation selection. Additionally, I acknowledge the integration of grid representation and light slab representation as a meaningful contribution. **My chief reservations lie in the paper's somewhat audacious claims, particularly the assertions of "Light Speed" and operability "on mobile device".** The detailed performance metrics on both Qualcomm and Apple chips are indeed valuable, but I firmly believe that a video demonstration is indispensable to validate such bold statements.
My assessment would be more favorable with further empirical evidence, perhaps, at this juncture, a benchmark on CoreML as proposed by the authors. I'm aware of the time constraints and would like to clarify that my 'borderline' rating is not an outright call for rejection. Instead, it's a nudge for the authors to ensure these highlighted areas receive the necessary emphasis in the revised manuscript. --- Rebuttal 4: Comment: We thank the reviewer for their effort and the opportunity to provide further supporting evidence for our work. We share benchmarking images of LightSpeed CoreML packages run on Apple A15 and A16 chips as empirical proof of our method's real-time capabilities via an anonymous link shared with the AC only (as required by the NeurIPS PCs). We kindly ask the reviewer to get the link from the AC. Best, Authors
On the Universal Approximation Properties of Deep Neural Networks using MAM Neurons
Reject
Summary: This manuscript proves universal approximation results for MAM neurons. MAM neurons are essentially ReLU neurons that operate on the sum of the maximum and the minimum of the weighted inputs, plus a bias. Previous work claims these neurons are useful for reducing the memory footprint of deep neural networks. Thus, in short, the paper proves how any real-valued continuous function defined on a compact set can be approximated to any arbitrary degree of precision by a network consisting primarily of MAM neurons (for instance in the sup norm). Strengths: The paper is relatively well-written and easy to understand. Universal approximation properties are important, and proving them is a key step when new activation functions are introduced. Weaknesses: There are two problems with this contribution. First, there are many universal approximation results in the literature already, and thus this contribution is perceived as incremental. Second, and more importantly, an even better result can easily be proved by adapting well-known techniques for providing simple proofs of universal approximation properties (the authors do not seem to be aware of the existence of such techniques). One only needs to slightly tweak the proof that is used for the case of ReLU neurons. In particular, it is easy to show that any continuous function from $[0,1]$ to $\mathbb{R}$ can be approximated by a neural network with a single linear output unit and two hidden layers of MAM neurons to any degree of precision $\epsilon$. To see this, note that f is *uniformly* continuous over $[0,1]$. Thus the $[0,1]$ interval can be subdivided into small intervals of size $\alpha$, such that within any such interval f is contained "within a sleeve of thickness $\epsilon$". Now connect the input to all the MAM neurons in the first hidden layer with identical weights equal to 0.5. As a result, the max + min portion of the input of each MAM neuron in the first hidden layer is equal to the input $x$.
Now select a sequence of (arithmetically) increasing biases so that: 1) the first hidden neuron is turned on if $x$ is in the first interval of size $\alpha$ and all the other hidden neurons produce a zero; 2) the first and second neurons are turned on if $x$ is in the second interval of size $\alpha$ and all other hidden neurons produce a zero; etc. In other words, essentially code the value of $x$ by the number of neurons that are turned on in the hidden layer. It is then easy to see how to design the following hidden layer and connect it to the single linear output neuron to obtain the desired approximation. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: The authors may consider providing a simpler proof, along the lines sketched above, for a potential submission to a lesser conference. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 2 fair Contribution: 1 poor Limitations: The main limitation of the universal approximation results is that the hidden layers can be arbitrarily large (depending on the size of $\alpha$ in the proof sketched above), and thus in general the constructive proofs of these results are not practical. This is a well-known limitation and the authors, to their credit, do mention it. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
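For a single input, the reviewer's observation that the max + min of identically weighted inputs reduces to $x$ makes the construction easy to verify numerically. Below is a minimal sketch (our own illustration with an arbitrary target function and grid size, not code from the paper under review); each first-layer MAM neuron computes ReLU$(x - t_j)$, and a linear output layer recovers the piecewise-linear interpolant. Note that for a single input this even skips the second (counting) layer of the reviewer's scheme, which only becomes necessary with multiple inputs:

```python
import numpy as np

def mam(x, w, b):
    """MAM neuron: ReLU(max(w*x) + min(w*x) + b) over the weighted inputs."""
    z = np.atleast_1d(w) * np.atleast_1d(x)
    return max(z.max() + z.min() + b, 0.0)

f = lambda x: np.sin(2 * np.pi * x)   # example target on [0, 1]
knots = np.linspace(0.0, 1.0, 21)     # grid of width alpha = 0.05
alpha = knots[1] - knots[0]

# First hidden layer: weight 0.5, bias -t_j.  With a single input, the
# max + min of the weighted input is 0.5x + 0.5x = x, so each neuron
# computes ReLU(x - t_j): a ramp that switches on at knot t_j.
def hidden(x):
    return np.array([mam(x, 0.5, -t) for t in knots[:-1]])

# Linear output layer: coefficients that turn the ramps into the
# piecewise-linear interpolant of f at the knots.
slopes = np.diff(f(knots)) / alpha
coeffs = np.concatenate([[slopes[0]], np.diff(slopes)])

def net(x):
    return f(0.0) + coeffs @ hidden(x)

xs = np.linspace(0.0, 1.0, 1001)
err = max(abs(net(x) - f(x)) for x in xs)
print(f"sup-norm error with alpha={alpha:.2f}: {err:.4f}")
```

The sup-norm error shrinks as the grid width alpha is reduced, matching the "sleeve of thickness $\epsilon$" argument; the rebuttal's point is that this simplicity does not survive the step to multiple inputs.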
Rebuttal 1: Rebuttal: We thank the reviewer for their time and their valuable feedback. While we are naturally disappointed by the outcome, we value the reviewer's expertise and the insights they provided. We respond to the reviewer's comments here, hoping they might reconsider their view of this work. Indeed, we are aware of the literature on universal approximation by means of neural networks with ReLU activations, but we had to resort to a different approach to cope with the double requirement of i) considering multi-input networks and ii) using MAM neurons in all layers but the last one. These are, in fact, the conditions presented in reference [1] of the paper under which one is able to leverage the features of the MAM neurons. Hence, the rules of our approximation game prevent the use of linear combinations in all layers but the last one. Regrettably, the classical multi-input universal approximation results that are general enough to include ReLU nonlinearities basically hinge on the model described, for example, in [Hornik, "Approximation Capabilities of Multilayer Feedforward Networks", Neural Networks, 1992], i.e., on an input-output relationship of the kind $$ \sum_j y_j \Psi\left(\left\langle a_j,x\right\rangle+b_j \right) $$ where $x$ is the input vector, $\Psi$ is the non-linearity, $y_j$ are the coefficients of the output linear layer, $a_j$ is the $j$-th coefficient vector, $b_j$ is the $j$-th bias and $\left\langle \cdot,\cdot\right\rangle$ stands for the scalar product. MAC+ReLU networks fit within this model if one aggregates, for example, three properly programmed ReLUs to generate the "bounded and non-constant" function for which universal approximation can be proved (and it is easy to see that this aggregation of multiple ReLU profiles to yield a trapezoidal shape is exactly what we do in our first layer).
Regrettably, if one wants to use MAM neurons in all but the last layer, then the scalar product cannot be used, and this spoils Hornik-like general results. Clearly, this is not perceived if sticking to only one input, as in the example given by the reviewer, which is indeed correct but cannot be easily generalized. In fact, even adopting the non-local strategy suggested by the reviewer, when going from one input to multiple inputs, one needs to provide to the final linear combination some building blocks that depend simultaneously on all the inputs. As the scalar product is out of the reach of a MAM neuron, this must be done *after* profiles depending separately on each input are computed, and thus in a subsequent layer. This is why a second layer is needed, and it would also be needed when scaling the simple approach proposed by the reviewer up to multiple inputs if no pre-processing by a scalar product is allowed. This said, aggregating single-input profiles into multi-input profiles using only MAM neurons (no MAC neurons allowed until the last layer) is not straightforward and, even if it were possible starting from the profiles suggested by the reviewer, the complexity of the second layer would be analogous. Overall, though tempting, the approach put forward by the reviewer loses its simplicity when tackling the multi-input case without being able to resort to a pre-processing of the input by means of a scalar product, which is not available in our model. For these reasons, though we appreciate the clever suggestion from the reviewer, we still think that the complexity we have highlighted and tackled in the paper cannot be easily circumvented, which implies the contribution cannot be seen as incremental. --- Rebuttal Comment 1.1: Comment: I thank the authors for their response but I continue to view this contribution as incremental and thus leave my score unchanged.
--- Reply to Comment 1.1.1: Comment: Though we cannot but thank the reviewer for the additional time spent on our paper, we would like to comment on the reply we received. The reviewer seems to think that our results can be derived somewhat easily from existing theory and offered a one-input example that may seem to substantiate this. Yet, our rebuttal explains that the straightforward construction proposed by the reviewer cannot be extended to more than one input and thus is of no help in proving our results in a simpler way. The explanation starts from the classical tool on which many universal approximation results are based and shows that one of the required features (a linear combination of the neuron inputs *before* the nonlinearity) is not present in MAM architectures. Since the reviewer does not reply on this point, we assume that the issue is cleared and thus that it is recognized that classical universal approximation theorems cannot be extended in a straightforward way to prove our Theorem 1 and Theorem 2. Add to all this that our Theorem 2 provides universal approximation of differential features of the target, which is commonly a separate result. In light of all this, we believe that labeling our result as incremental is technically wrong and should not be accepted as the basis for a decision.
Summary: This paper demonstrates that the network can still maintain the universal approximation property after substituting the classical MAC hidden neurons of neural networks with MAM neurons, which rely only on the maximum and minimum elements of the summation, allowing for more aggressive pruning. Specifically, the authors consider a network with two hidden MAM layers and two kinds of output layers. They show that the networks can achieve universal approximation capabilities under different norms for target functions with varying smoothness. The constructive proof of the first case utilizes an idea similar to the partition of unity, and the second one decomposes the whole domain and deals with the local behavior of the subnetworks involving the second hidden layer. Strengths: The authors provide a universal approximation result for a recently proposed hidden neuron for neural networks with two different output designs. The constructive proof introduces localized hyper-rectangles, which may inspire other step-function-like constructions in some approximation problems. It's also helpful to provide some intuitive illustrations of the construction. Weaknesses: While the result is somewhat interesting, it fails to provide more compelling evidence for the newly proposed MAM neurons. In the introduction, the MAM neurons' main advantage is that they allow more aggressive pruning, but the theorems and the proof process do not investigate these properties. And there is no further discussion of the academic and practical potential of MAM neurons, weakening the significance and attractiveness of this work. In Section 5, the authors claim that the theorem has no constraints on the layer width, which contradicts conventional universal approximation properties. However, the proof introduces a parameter n that needs to be chosen sufficiently large (see line 158).
Interestingly, this parameter seems to be related to the width or scale of the neural network according to its definition in line 141. Hence, there appears to be a potential contradiction in this context. There are several evident traces of incompleteness throughout the manuscript, such as lines 141-145, the equations after lines 170 and 216, and the unfinished Conclusions section in lines 229-230, indicating that this paper was hastily written and has not undergone thorough revision. The inadequate mathematical formatting in the manuscript has resulted in difficulties in comprehending the proof. Some mathematical notations used in this paper contradict commonly used notations in classical theories. The LaTeX formatting in the manuscript is not standardized. Technical Quality: 2 fair Clarity: 1 poor Questions for Authors: One critical issue with the inadequate mathematical formatting is the lack of notation for the width of the hidden layers in the network, resulting in significant ambiguity regarding the lengths of vectors and the ranges of summation throughout the manuscript, especially in the proof of Lemma 1. Some mathematical notations may need to be changed: the norm defined after line 81 might be easily misread as the Sobolev norm, the L'(x) and L''(x) in line 74 are confusing and could be mistaken for derivatives of the function L(x), and the o( ) in line 202 contradicts the standard infinitesimal notation. There are instances where symbols are misused, such as the $\setminus$ in line 177, Eq. (6), the fractional form in line 187, and the o( ) in line 202, and unnecessary empty lines precede certain equations, such as Eqs. 7, 9, 14, 16, and 17. The authors give two theorems for different settings, each with various norms, smoothness assumptions, and output designs. However, the relationship between these two theorems is not clear. It would be better to show that one is a corollary of the other.
Besides, I hope the authors could find more substantial advantages of this unique neuron through theoretical investigation, which would strengthen the paper's contributions. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 1 poor Contribution: 2 fair Limitations: The authors do discuss the limitations in Section 5. However, due to the issue with the parameter n (see Weaknesses), I think the discussion is insufficient. Besides, they mention that the theoretical results wouldn't directly lead to an efficient approximation, so I wonder whether any numerical experiments were conducted based on the setting in this paper. Besides, how large is the gap between the theoretical setting and practical applications, since this paper has many assumptions and hypotheses? Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their time and their valuable feedback. While we are naturally disappointed by the outcome, we value the reviewer's expertise and the insights they provided. We respond to the reviewer's comments here, hoping they might reconsider their view of this work. > While the result is somewhat interesting, it fails [...] weakening its significance and attractiveness as well as this work. As summarized in the paper, the aim of this study is to establish whether a MAM-based network can function as a universal approximator, and we did so by providing theorems with a rigorous proof process. This is an initial step of paramount importance for gaining a theoretical understanding of MAM neurons, paving the way for further exploration. While we agree with the reviewer that a theoretical understanding of other properties of this neuron (e.g., prunability) is of interest, we believe that undertaking such an investigation would constitute a distinct and subsequent step in our research. > In Section 5, [...] a potential contradiction in this context. The reviewer's observation is valid, as the parameter $n$ is directly related to the scale of the network. The link between the approximation guarantee and the number of neurons is indeed there and depends on two quantities, uniformly indicated with $\delta$ and $\ell$. We will make this link explicit. $\delta$ (Theorem 1, see proof of Theorem 1) and $\delta$ and $\ell$ (Theorem 2, see Lemma 4) control the quality of the approximation, but also the number of neurons needed in the first layer and, implicitly, in the second one (see the specialization of the general network construction in lines 141-143 for Theorem 1 and 161-173 for Theorem 2). The reviewer is correct in noting that the theorems impose no constraints on the layer width. However, our intention was to emphasize a limitation rather than presenting it as an advantage.
In fact, we do not impose any constraint on the magnitude of $n$ and allow it to be sufficiently large, which has repercussions on the efficiency of the approximation. We will rephrase "Limitations" to provide a clearer explanation. > There are several evident traces of incompleteness [...] just like Eq. 7, 9, 14, 16, 17. We apologize for any confusion caused by our stylistic choices. However, after careful proofreading, we can guarantee that none of the portions of text indicated by the reviewer contains any error or is grammatically incomplete, and only a few equations contain minor typos: * *lines 141-145*, *equation after line 170*, *fractional form in line 187*: to the best of our knowledge, nothing is wrong here. * *equation after line 216*: there is a minor typo, and it will be fixed. * *Conclusions section*: as in other papers of this kind (e.g., [Lu, Yulong; Lu, Jianfeng. A Universal Approximation Theorem of Deep Neural Networks for Expressing Probability Distributions, NeurIPS (2020)]), the conclusion is concise and could be extended, but it is neither unfinished nor incorrect. * *lack of notation related to the width of the hidden layers*: we explained this above. * *norm defined after line 81*: indeed, that should be the Sobolev norm. However, there is a typo that may have misled the reviewer. Precisely, the term $\frac{\partial\phi}{x_j}(x)$ should have been written as $\frac{\partial\phi}{\partial x_j}(x)$. * *L'(x) and L''(x) in line 74*: in this work, first and second derivatives are always indicated as fractions of differentials, whereas $\mathcal{L}'$ and $\mathcal{L}''$ are defined in line 74 as the MAM hidden layers. We will revise the notation to avoid this small chance of ambiguity. * *o( ) in line 202*: the function $o(\cdot)$ is clearly defined right after its use. However, to avoid ambiguities, we will change the notation.
* *$\setminus$ in line 177*: to the best of our knowledge, the LaTeX symbol \setminus $\setminus$ is commonly used to indicate the difference between two sets, as in line 177. * *unnecessary empty lines*: we will remove the space between the text and the equations. > The authors give two theorems [...] one is a corollary of the other. Neither theorem is a corollary of the other. In our view, they are complementary and address in different ways the threefold trade-off between the complexity of the network, the strength of the error norm, and the smoothness requirements on the target function. Theorem 1 guarantees the strongest, uniform approximation of targets, which are only required to be continuous. However, it requires a more complicated last layer that needs normalization. Theorem 2 guarantees a looser approximation, as its error norm is of the integral type and thus allows local deviations in vanishing-measure neighborhoods. Yet this weaker approximation, paired with a stronger smoothness assumption on the target, allows us to leverage a simpler, purely linear last layer and to show that not only the value of the target can be reproduced but also its first-order differential behavior. We agree with the reviewer on the opportunity to clarify this in the paper, and we will do so if the paper is accepted. > I wonder if there are any numerical experiments conducted [...] this paper has many assumptions and hypotheses? We do not believe that our paper has many assumptions and hypotheses, as stated by the reviewer, since the only assumption is that the function to be approximated must be $\mathcal{C}^0$ for Theorem 1 and $\mathcal{C}^2$ for Theorem 2.
Moreover, even though it is very uncommon in papers discussing the universal approximation property of neural networks, we did conduct numerical experiments to further verify the correctness of the theory, and in the "Examples" section a numerical example is presented to enable the reader to visualize the difference between the two theorems. If the paper is accepted, we will provide more quantitative results from numerical experiments for different values of $n$, together with quantitative evaluations of the approximation error. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their response. However, as noted by Reviewer p1mV, there might be simple proofs of the theorems. In addition, I found some highly related existing works, such as [1,2,3] below, which are absent from your paper. For example, [1] shows that GroupSort architectures are universal approximators. As there is a straightforward connection between max/min and sort, I tend to think of your results as a minor improvement of those results. I agree with Reviewer p1mV that it is better suited for a lesser conference. [1] Anil, et al., Sorting Out Lipschitz Function Approximation, ICML, 2019 [2] Tanielian, et al., Approximating Lipschitz continuous functions with GroupSort neural networks, AISTATS 2021 [3] Bohang Zhang, et al., Rethinking Lipschitz Neural Networks for Certified L-infinity Robustness, NeurIPS 2022 --- Reply to Comment 1.1.1: Comment: Though we cannot but thank the reviewer for the additional time spent on our paper, we would like to point out two issues that, in our view, are fundamental. Due to time constraints, they are laid out in a very terse and dry style. 
1) We replied to the first-round comments by highlighting that: 1.a) they were mainly concentrated on (mostly nonexistent) formal issues; 1.b) the technical problems were only apparent and derived from a very fast scan of the paper that, for example, led the reviewer to think that one of two distinct theorems (different assumptions, different guarantees) was a corollary of the other, or to complain that no discussion of the size of the network was present, when all the relevant elements (though this is not the main topic of the paper, as clearly declared from the beginning) are implicit in the proofs at the end of the paper, and a very light revision would make them explicit. To these clarifications we received no reply, and we are thus led to think that all these points are settled. Yet, 2) in this second round, the reviewer borrows from another reviewer the claim that our result could be simply derived from other architectures already presented and discussed. Three papers are indicated: [1] Anil, et al., Sorting Out Lipschitz Function Approximation, ICML, 2019 [2] Tanielian, et al., Approximating Lipschitz continuous functions with GroupSort neural networks, AISTATS 2021 [3] Bohang Zhang, et al., Rethinking Lipschitz Neural Networks for Certified L-infinity Robustness, NeurIPS 2022 Leaving aside the very serious procedural problem of changing first-round comments into completely different second-round comments borrowed from another reviewer, the technical point is completely void. GroupSort *ACTIVATIONS* surely have universal approximation properties, and in this sense they are equivalent to MAM and many other architectures; yet they achieve this with a scheme that is structurally different from MAM. In fact, even a fast scan of our paper and of those mentioned by the reviewer immediately highlights that GroupSort acts on the array of pre-activations, i.e., on a set of linear combinations of the layer inputs. Such linear combinations are key in building the architecture. 
In MAM we do not have any linear combination. The Max and Min operators are applied to separately weighted layer inputs, so that Max and Min provide the aggregating part of the computation, while the activation is the classical ReLU. This MAC-less structure is what makes MAM so effective in pruning and cannot be avoided. For this elementary reason, it is not at all straightforward to reproduce MAM behaviour with GroupSort. The very high-level, intuitive relationship between "sorting" and "max"/"min" is not a mathematical proof, and though the suggested papers show that GroupSort can be specialized into something called MaxMin, the latter has nothing to do with MAM. In fact, the reviewer provides not even a sketch of the path leading to the alleged equivalence. Add to the fact that MAM universality cannot simply be derived from GroupSort universality the fact that our Theorem 2 also provides universal approximation of the differential features of the target. In light of all this, we believe that labeling our result as incremental is technically wrong and should not be accepted as the basis for a decision.
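To make the structural difference concrete, here is a minimal NumPy sketch (our own illustration for this discussion, not code from the paper; the exact weight layout of the MAM neuron is an assumption based on the description above). A MAC neuron aggregates its inputs with a dot product before the nonlinearity, while a MAM neuron weights each input separately and aggregates with max and min, with no linear combination anywhere:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def mac_neuron(x, w, b):
    # Classical Multiply-And-ACcumulate: linear combination, then ReLU.
    return relu(np.dot(w, x) + b)

def mam_neuron(x, w_max, w_min, b):
    # Multiply-And-Max/min (illustrative layout): each input is weighted
    # separately; max and min replace the accumulation, ReLU is the activation.
    return relu(np.max(w_max * x) + np.min(w_min * x) + b)

x = np.array([1.0, -2.0, 0.5])
mac_out = mac_neuron(x, np.array([0.2, 0.1, -0.4]), 0.1)
mam_out = mam_neuron(x, np.array([0.2, 0.1, -0.4]), np.array([0.3, -0.2, 0.5]), 0.1)
```

Under this reading, pruning a MAM connection removes a candidate from the max/min rather than a summand from a dot product, which is consistent with the pruning motivation described above.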
Summary: This paper presents two universal approximation theorems for deep neural networks associated with a so-called Multiply-And-Max/min (MAM) activation function defined with the maximum and minimum of the input components and a bias constant. One is for uniform approximation and the other for approximation in the Sobolev space W^{1, p} with p\ge 1. Strengths: The presented universal approximation theorems are interesting and should have implications to show the power of pooling layers in deep neural networks. Studying simultaneous approximation in terms of the norm in the Sobolev space W^{1, p} should be able to explain the efficiency of some deep learning algorithms. These ideas are novel in my opinion. Weaknesses: Though the approximation theorems are novel to me, the paper has a few weak points and should be improved: 1. To demonstrate some theoretical advantages of the MAM activation function. This might be done with the pooling layers in deep neural networks. 2. To present rates of approximation. Quantitative estimates for approximation of functions in various function spaces are crucial in the approximation error estimate for generalization analysis of deep learning algorithms. 3. To give rigorous statements and proofs. In Lemma 1, the sentence "Let z be any ... layer." should be removed because the output z is constructed by (5) and is not an arbitrary output function. In its proof, "We assume ..." and "the output ... only one of the inputs" should be revised: the neurons are constructed by (7), not by assumption. In Lemma 4, P is not a constant. It is a quantity depending on \ell of the form constant + o(1/\ell). 4. To give fair credits to the existing literature. For example, the construction of trapezoid functions has a long history in the study of deep neural networks and can be found in the papers of Shaham-Cloninger-Coifman (2018), Chui-Lin-Zhang-Zhou (2020), and some others. 
Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Some questions listed in the previous section should be answered. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Better theoretical results would improve the quality of the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate the time and valuable feedback provided by the reviewer. Below, we respond to the reviewer's comments, incorporating their insights to improve the quality of our work. > To demonstrate some theoretical advantages of the MAM activation function. This might be done with the pooling layers in deep neural networks. This is an interesting point, but as stated in the Introduction, this work is a first step in the definition of the theory behind MAM neurons. We decided to start with what is, from our point of view, the most important property that a neural network must guarantee, i.e., the universal approximation property. A theoretical demonstration of the advantages of the MAM neurons presented in reference 1 of the paper will be the focus of future work. As a note, Multiply-And-Max/min is a paradigm for a neuron, not an activation function (i.e., we use the ReLU activation function in our network). The suggested comparison with pooling layers could indeed be explored. We may also highlight that even if the behavior of a MAM fully-connected layer may be reminiscent of that of a max-pooling layer, a MAM layer can be trained, and the dimension of its output depends only on the number of neurons of the layer (i.e., the output could even contain more elements than the input). > To present rates of approximation. Quantitative estimates for approximation of functions in various function spaces are crucial in the approximation error estimate for generalization analysis of deep learning algorithms. We agree with the reviewer that adding information on the error estimate would be valuable. We will do so in the final version of the paper, if accepted. 
Error estimates are in fact implicit in the proofs and depend on some key quantities: a bounding constant deriving from the smoothness assumptions we make on the target function, and an infinitesimal factor depending on the construction and thus, ultimately, on the number of neurons in the network layers. The estimate for Theorem 1 is implicit in the equation after line 158 and is essentially the modulus of continuity of the target function. The estimate for Theorem 2 is slightly more complicated and has two parts. Due to the piecewise-linear nature of the min and max functions, the scheme we adopt is clearly related to piecewise-linear interpolation, which is used both in disjoint hyperrectangular domains that can be made to fill the whole domain and in the regions between such domains, whose measure can be made to vanish so as not to contribute to the integral norm. Both terms depend on the features of the target function and on the construction of the network, which governs the positioning and size of the hyperrectangular domains. In both cases we may apply classical error bounds for piecewise-linear approximation, yielding an estimate deriving from the statement of Lemma 4, in which one may relate the parameters $\delta$ and $\ell$ to the number of neurons in the network. > To give rigorous statements and proofs. In Lemma 1, the sentence "Let z be any ... layer." should be removed because the output z is constructed by (5) and is not an arbitrary output function. In its proof, "We assume ..." and "the output ... only one of the inputs" should be revised: the neurons are constructed by (7), not by assumption. In Lemma 4, P is not a constant. It is a quantity depending on $\ell$ of the form constant + o(1/$\ell$). We will coherently modify the sentence the reviewer has highlighted from Lemma 1. We also acknowledge that the proof of Lemma 4 is concise, which may lead the reader to mistakenly consider $P$ dependent on $\ell$. 
However, $P$ is actually a constant independent of $\ell$. To demonstrate this, we include below a comprehensive breakdown of all the intermediate steps starting from the equation at line 218 and leading to the conclusion of Lemma 4 and the definition of $P$.
$$
\begin{aligned}
E_{\omega} &\le \left[ \left(\frac{1}{2}M_2N^2\ell^2\right)^p + \left(\frac{1}{2} M_2N\ell\right)^p \right](2\ell)^N + \left[M_3^p+(2M_1)^p\right] \left[(2\ell+2\delta)^N-(2\ell)^N\right]\\
&= \left[\left(\frac{1}{2}M_2N^2\ell\right)^p+\left(\frac{1}{2}M_2N\right)^p \right]\ell^p(2\ell)^{N}+\left[M_3^p+(2M_1)^p\right] \left[(2\ell+2\delta)^N-(2\ell)^N\right]
\end{aligned}
$$
Since $\ell\le 1$, the term $\left(\frac{1}{2}M_2N^2\ell\right)^p+ \left(\frac{1}{2}M_2N\right)^p$ is bounded above by $P=\left(\frac{1}{2}M_2N^2\right)^p+ \left(\frac{1}{2}M_2N\right)^p$. Here, we can also define $Q = M_3^p+(2M_1)^p$ so that we obtain:
$$
\begin{aligned}
E_{\omega} &\le P \ell^p(2\ell)^{N} + Q \left[(2\ell+2\delta)^N-(2\ell)^N\right]\\
&= (2\ell+2\delta)^N \left\{P \ell^p \frac{(2\ell)^{N}}{(2\ell+2\delta)^N}+ Q \left[1-\frac{(2\ell)^N}{(2\ell + 2\delta)^N}\right] \right\}\\
&= (2\ell+2\delta)^N \left\{P \ell^p \frac{1}{(1 + \delta/\ell)^N} + Q \left[1-\frac{1}{(1 + \delta/\ell)^N}\right]\right\}\\
&= (2\ell+2\delta)^N \left\{P \ell^p \left[1 - o(\delta/\ell) \right] + Q\, o(\delta/\ell)\right\}
\end{aligned}
$$
with $o(\cdot) = 1 - 1/(1 + \cdot)^N$. Hence, $P$ is a constant that does not depend on $\ell$. We admit that the step where $P$ is defined as an upper bound of a term depending on $\ell$ was not made explicit, and its absence may mislead the reader. For this reason, we will add it to the final version of the manuscript. > To give fair credits to the existing literature. 
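As a quick numerical sanity check of the two steps above (illustrative only; the values of $M_2$, $N$, $p$, $\ell$, $\delta$ are arbitrary choices of ours), one can verify both that $P$ bounds the $\ell$-dependent term whenever $\ell \le 1$ and that the final substitution $(2\ell)^N/(2\ell+2\delta)^N = 1 - o(\delta/\ell)$ is an exact algebraic identity:

```python
# Illustrative check with arbitrary constants; not code from the paper.
M2, N, p = 1.7, 3, 2

def term(ell):
    # The ell-dependent factor that the rebuttal bounds by P.
    return (0.5 * M2 * N**2 * ell) ** p + (0.5 * M2 * N) ** p

P = (0.5 * M2 * N**2) ** p + (0.5 * M2 * N) ** p

# For ell <= 1 the term never exceeds P (it equals P at ell = 1).
assert all(term(ell) <= P for ell in [0.01, 0.3, 0.7, 1.0])

def o(x):
    # o(.) as defined in the rebuttal: o(x) = 1 - 1/(1+x)^N.
    return 1.0 - 1.0 / (1.0 + x) ** N

# (2 ell)^N / (2 ell + 2 delta)^N  ==  1 - o(delta/ell)  exactly.
ell, delta = 0.4, 0.05
lhs = (2 * ell) ** N / (2 * ell + 2 * delta) ** N
assert abs(lhs - (1.0 - o(delta / ell))) < 1e-12
```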
For example, the construction of trapezoid functions has a long history in the study of deep neural networks and can be found in the papers of Shaham-Cloninger-Coifman (2018), Chui-Lin-Zhang-Zhou (2020), and some others. We are sorry for not mentioning these works and we thank you for the suggestion. We will be happy to add these references to the paper.
Summary: The paper studies the universal approximation properties of ReLU networks using the Multiply-And-Max/min (MAM) neurons. Literature on the universal approximation properties of ReLU networks using the Multiply-and-ACcumulate (MAC) neurons is vast. However, the study on MAM neurons seems lacking. Hence, two theorems taking a step in characterizing the universal approximation properties for MAM neurons are proved in this paper. The first theorem states that a two-hidden-layer ReLU network using MAM neurons in the first two layers and the normalized linear combination in the last layer can approximate any continuous function on a unit hypercube arbitrarily well in terms of the infinity norm. The second theorem is similar to the first one, stating that a two-hidden-layer ReLU network using MAM neurons in the first two layers and the linear combination in the last layer can approximate any twice continuously differentiable function on a unit hypercube arbitrarily well in terms of the Sobolev norm. The proofs of these two theorems are constructive and the authors also acknowledge that their results do not imply efficient approximation. Strengths: The novelty is clear. The novel contribution of this paper is apparently the theoretical guarantees on the universal approximation properties of ReLU networks using MAM neurons. Although I have not carefully validated the proof, the explanations and statements given in the paper seem to be sufficiently convincing. Overall, this is a well-written paper. The presentation is concise, clear, and easy to follow. I enjoy reading the paper. Weaknesses: 1. The requirement of the target function being twice continuously differentiable seems a bit limited. It would be great if the authors could relax this assumption or clarify why this assumption is necessary. 2. The result in Theorem 2 relies on the L^p Sobolev norm. Would it be possible to extend the result to the infinity norm? 
The paper would be more convincing and clearer if the authors can justify why the normalized linear combination and the linear combination use different norms. The connection between Theorem 1 and 2 seems missing. 3. Given the observation that using MAM or the mixed MAM/MAC neurons gives better pruning performance than the MAC neurons in practice, the paper would be more convincing if the authors can provide some insights into the constructive approximation of these different schemes. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Line 81-82: Is the second term a partial derivative of phi with respect to x_j? There is a missing partial symbol in the denominator. Line 112: The authors mention “...weakly unimodal piecewise-linear functions…” What does the word “weakly” mean here? Can we use unimodal piecewise-linear functions? The two theorems provided in the paper rely on the assumption of using a two-hidden-layer network. I think it may be trivial to extend the results to multiple layers. Can we extend the results to a single-hidden-layer network? If not, would it be possible to prove it? It would be nice to give some insights into this. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The authors clearly state the limitations of their work in Section 5. Specifically, they point out that their results do not imply efficient implementation. I’m glad to see they make it very clear. They also state that the efficiency of approximation will be their future focus. I think this paper has laid a good foundation for their future work. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate the time and valuable feedback provided by the reviewer. Below, we respond to the reviewer's comments, incorporating their insights to improve the quality of our work. > The requirement of the target [...] clarify why this assumption is necessary. We can clarify the motivation behind this assumption. The $\mathcal{C}^2$ assumption is used to leverage the Taylor expansion of the target function in equations (14) and (15). To prove the approximation of first-order derivatives, we need an expansion up to the same order with a remainder that can be bounded and pushed to zero by narrowing the neighborhood. This could be achieved locally by assuming that $f$ is only $\mathcal{C}^1$, if one gives up a well-defined form for the remainder. This is surely possible, but i) it spoils the uniformity of the remainder over the whole domain and requires a more sophisticated bounding argument, and ii) it prevents the possibility of giving an easy error estimate depending on the size of the network. Since another reviewer asked for ii), we would keep the $\mathcal{C}^2$ assumption to be able to provide this insight. > The result in Theorem 2 relies [...] Theorem 1 and 2 seems missing. In our view, the theorems are complementary and address in different ways the threefold trade-off between the complexity of the network, the strength of the error norm, and the smoothness requirements on the target function. Theorem 1 guarantees the strongest, uniform approximation of targets, which are only required to be continuous. However, it requires a more complicated last layer that needs normalization. Theorem 2 guarantees a looser approximation, as its error norm is of the integral type and thus allows local deviations in neighborhoods of vanishing measure. This weaker approximation, paired with a stronger smoothness assumption, allows us to leverage a simpler linear last layer and to show that both the value of the target and its first-order differential behaviour can be reproduced. 
Even allowing for potential smarter constructions, the transition from the Sobolev to the infinity norm is not a straightforward step. If we forgo the normalized layer, we cannot ensure that the linear combination of the functions $z$ remains constant. To address this, we use a construction that deals with dimensionality effects and leaves certain areas of the domain uncovered. If the reviewer is interested, we would be glad to engage later in a further discussion on this topic. > Given the observation [...] different schemes. While this is an interesting observation, at present we do not possess a theoretical foundation to clarify why neural networks based on MAM exhibit superior pruning performance. Nevertheless, there is empirical evidence of this, as documented in references 1 and 2 of the paper. These studies show that, while maintaining a certain level of accuracy, using MAM neurons significantly increases the number of pruned connections compared to traditional MAC-only networks. The initial phase of our investigation involves examining whether a MAM-based network can function as a universal approximator. Thus, the primary objective of this work is to establish the universal approximation property for MAM networks. Addressing the pruning capability would necessitate a separate investigation and will be a subject of future work. > The authors mention “...weakly unimodal [...] unimodal piecewise-linear functions? The whole construction in the two theorems hinges on the availability of some basic building blocks that i) are functions of all the inputs, ii) are piecewise linear and can be obtained by MAM operations, and iii) feature a maximum and are at least non-increasing in all directions departing from that maximum. The word *weakly* is used to signify the *non-increasing* profile away from the maximum and is needed because in many steps of our construction strictly single-maximum functions would not fit. > The two theorems provided [...] some insights into this. 
We agree with the reviewer that extending our theorems to more than two hidden layers would be trivial, as it depends only on the ability of encoding the identity in a MAM layer. Yet, though we do not have any formal proof yet, we believe that the same guarantee cannot be given for one-hidden-layer networks. Our assumption is supported by the classical model adopted for multi-input, one-layer networks, which can be synthesized into an input-output relationship of the kind (see [Hornik “Approximation Capabilities of Multilayer Feedforward Networks”, Neural Networks, 1992]) $$ \sum_j y_j \Psi\left(\left\langle a_j,x\right\rangle+b_j \right) $$ where $x$ is the input vector, $\Psi$ is the non-linearity, $y_j$ are the coefficients of the output linear layer, $a_j$ is the $j$-th coefficient vector, $b_j$ is the $j$-th bias, and $\left\langle \cdot,\cdot\right\rangle$ stands for the scalar product. Regrettably, the scalar product is out of the reach of a MAM neuron, so if one wants to use MAM neurons in all but the last layer, the aggregation of the contributions coming from the processing of each input (equation (8) in the paper) must be done *after* some quantities depending separately on each input are computed in the first layer (equation (7) in the paper), and thus in a second hidden layer. Actually, a deeper analysis of equation (7) in the paper reveals that the MAM neurons in the first layer are used well below their potential computational capabilities: they basically implement a scaled ReLU with a variable bias. Hence, the construction uses only "*half*" of the first layer, even though we have to declare that two layers are involved. 
In the light of this, we believe that reducing the number of hidden layers from 2 (one may even say 1.5 layers, since the functionality of the neurons of the first layer is not fully exploited) to 1 is not a straightforward step, though it is surely worth analyzing in more detail, if only to provide a formal negative result. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their responses. My concerns are addressed and I will keep my rating unchanged. However, the concern raised by Reviewer Ar7s is critical. I also agree that there is an apparent connection between sorting and max/min. Also, I found another related work that is missing in the manuscript [1]. Since any continuous function on a compact subset of $\mathbb{R}^n$ can be approximated by a continuous piecewise linear (CPWL) function and any CPWL function has a max-min representation, i.e., $p(x)=\max_{\mathcal{X}\in Q}\min_{i\in \mathcal{A}(\mathcal{X})}f_i(x)$, it is important to point out that the MAM neurons are a natural choice for approximating any CPWL function. In a very recent NeurIPS 2022 paper [1], it was proved that using a max-min representation can lead to better upper bounds on the number of neurons for representing CPWL functions. From this, the universal approximation property of MAM neurons then seems to be trivial. [1] Chen, Kuan-Lin, Harinath Garudadri, and Bhaskar D. Rao. "Improved bounds on neural complexity for representing piecewise linear functions." Advances in Neural Information Processing Systems 35 (2022): 7167-7180. --- Reply to Comment 1.1.1: Comment: Thank you very much for appreciating our replies. Though the time window for interaction is closing, we would like to clarify that the concern of Ar7s is actually void. Regrettably, to realize why, it is necessary to overcome the only apparent analogy between some of the already proposed architectures that exploit max and min and our MAM neuron. 
As a matter of fact, the paper was probably not clear enough on this point, and regrettably this led to a big misunderstanding. Starting from the paper that you suggest (the same happens in all other papers that have been mentioned as possible prior art from which our result would *easily* follow) and considering, for example, equation 13, it is clear that, before min and max are applied, some functions $f_i$ have to be computed. Such functions are affine functions from $\mathbb{R}^N$ ($N$ being the number of inputs) to $\mathbb{R}$ and thus are some kind of biased linear combination of the inputs. This does not happen in MAM structures, which do not have any linear combination before min and max. In more neural terms, and intuitively speaking, multi-input neurons are made of two stages. The first one weights and aggregates the inputs; the second decides the activation, and thus the output of the neuron, based on the aggregated pre-activation (possibly on the aggregated pre-activations of the whole layer, as in the GroupSort case). All the examples and the literature that have been pointed out play with the activation part (this is explicit when speaking of GroupSort but structurally the same for eq. 13 in [1]) and not with the aggregation part, which is instead the one on which MAM focuses. This is because MAM has been designed for pruning, a task for which it is fundamental to assess the importance of each input before it is aggregated, an importance that is partially lost when applying a linear combination and considering only its result from there on. Regrettably, since a linear combination is available only in the last layer, MAM networks cannot tackle the problem of multi-dimensional inputs directly: the multi-dimensional profiles must be built by a suitable composition of nonlinear behaviours. This is actually what our second layer does. 
Clearly, if one addresses only one-input networks, such complexity does not appear, and this is what misled reviewer p1mV into believing that our result can be derived trivially from existing theory. This is not true for more than one input, i.e., in all the interesting cases. Overall, though we admit that explicitly mentioning in the paper some loosely related literature with suggestively similar titles would have avoided all this, we are very sorry to see that this misunderstanding is heavily and negatively biasing the whole review process.
NeurIPS_2023_submissions_huggingface
2023
Stable Nonconvex-Nonconcave Training via Linear Interpolation
Accept (spotlight)
Summary: This paper applies linear interpolation to make neural network training stable. Based on the analysis of instabilities, the authors propose a new optimization scheme, RAPP. RAPP achieves last-iterate convergence rates for the full range of cohypomonotone problems. Moreover, by replacing the inner optimizer in RAPP, the authors propose a new Lookahead algorithm, which is helpful for expanding cohypomonotone problems. The authors also do experiments to verify that linear interpolation is beneficial to RAPP and Lookahead. Strengths: This paper applies linear interpolation to make neural network training stable. Based on the analysis of instabilities, the authors propose a new optimization scheme, RAPP. RAPP achieves last-iterate convergence rates for the full range of cohypomonotone problems. Moreover, by replacing the inner optimizer in RAPP, the authors propose a new Lookahead algorithm, which is helpful for expanding cohypomonotone problems. The authors also do experiments to verify that linear interpolation is beneficial to RAPP and Lookahead. Weaknesses: 1. As illustrated in definition 4.1, formulation (4) is incorrect, because $z$ and $z'$ have different dimensions. 2. In Theorem 5.1, the authors claim that Algorithm 1 converges to a neighborhood linearly, but, according to formulation (6), the upper bound of the neighborhood is $\infty$ when $\gamma\to 1/L$. This bound is too loose. 3. The authors didn't mention linear interpolation in their theorems, but they add linear interpolation in their experiments. The theorems about linear interpolation are lacking. 4. The authors propose an optimization scheme RAPP and compare it to Adam. However, comparison with other optimization schemes, such as Momentum, RMSprop, and Adadelta, is lacking. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. What's the meaning of “id”? 
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: See the weakness. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable feedback and address all remaining concerns below: - **Linear convergence** when we have $\|w^{\tau+1}-w^\star\| \leq c \|w^\tau-w^\star\|$ for $c \in (0,1)$ it is standard to refer to it as linear convergence, but we will add a remark clarifying that the rate suffers if $\gamma$ is too close to $1/L$. Unfortunately this dependency is unavoidable, and it also appears even for average-iterate results of extragradient-based methods (through e.g. $\kappa$ in Cor. 3.2 of [Pethick et al. 2021](https://openreview.net/pdf?id=2_vhkAMARk)). - **Missing theory for linear interpolation** We think there might be a misunderstanding. The methods RAPP, IKM, EG+ and Lookahead all use linear interpolation and are analysed in Lemma 6.1, Theorem 6.2, Corollary 6.4 and the theorem statements in Section 7. So the theoretical results cover the linear interpolation used in the algorithms. - **Additional baseline** As suggested, we additionally tried SGD with momentum. We started with the default of 0.9, but ended up successively decreasing the momentum parameter due to instability. As can be inferred from the table below, the only stable run we obtained was for momentum=0.1, for which the FID was still worse than for SGD without momentum (see Table 3 in the paper). | momentum | 0.9 | 0.5 | 0.1 | |----| -------- | -------- | -------- | |FID | 319.52 | 153.39 | 20.90 | Note that in the paper we already include comparisons with both Adam and SGD. Minor comments: - **Typo in equation (4)** That is indeed a typo – the $n$ should have been $d$ (the correct definition can be found in Definition B.1 of the appendix). Thanks for catching it. - **Meaning of “id”** It refers to the identity mapping. We will add the definition to Appendix B where the rest of the definitions can be found. --- Rebuttal Comment 1.1: Comment: Thanks for your reply. To be honest, your paper is difficult to follow. 
Other reviewers also lack confidence in your paper, as indicated by their lower confidence ratings. Moreover, according to the definition of linear convergence rate on Wikipedia, your answers are unsatisfactory. When $\gamma\to 1/L$, the second term of (6) becomes $\infty$, and the definition of linear convergence rate doesn't include the second term. In addition, the baselines are still few. In summary, I keep my rating. --- Reply to Comment 1.1.1: Comment: Thank you for following up on the rebuttal! It is true that the _size of the neighborhood_ to which we converge (linearly) increases with $\gamma$, but in the deterministic case $\sigma$ is zero, so we converge exactly. In the stochastic case (treated in the appendix) we control the term through $\sigma$. Concerning the baselines, we agree that more comparisons never hurt, but we had to draw the line somewhere due to computational constraints. Each run takes 30 hours and we average over 5 independent runs for each configuration due to the high variance common to GAN training. With that said, we do compare with 4 strong baselines: GDA, EG, ExtraAdam and Adam (which amounts to almost a month of compute just for the baselines).
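To illustrate the point being debated, here is a tiny toy simulation (our own sketch with arbitrary constants, not the paper's algorithm): for a recursion $d_{\tau+1} = c\,d_\tau + \varepsilon$ with $c\in(0,1)$, the distance contracts linearly down to a neighborhood of radius $\varepsilon/(1-c)$, and that radius grows without bound as $c\to 1$, mirroring how the neighborhood in (6) grows as $\gamma\to 1/L$:

```python
# Toy recursion d_{t+1} = c * d_t + eps: linear convergence to a neighborhood.
def limiting_distance(c, eps, d0=1.0, steps=500):
    d = d0
    for _ in range(steps):
        d = c * d + eps
    return d

c, eps = 0.9, 0.01
d = limiting_distance(c, eps)
radius = eps / (1 - c)  # neighborhood radius; blows up as c -> 1
assert abs(d - radius) < 1e-6
assert limiting_distance(0.99, eps) > radius  # larger c => larger neighborhood
```

In the deterministic case the additive term vanishes ($\varepsilon = 0$ here, playing the role of $\sigma$), and the same recursion contracts all the way to zero.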
Summary: The paper gives a theoretical analysis of linear interpolation that can help stabilize neural network training. They show that instabilities in the optimization are caused by nonmonotonicity in the loss landscape. They also construct a new optimization scheme, called "relaxed approximate proximal point" (RAPP), which is the first explicit method to obtain last-iterate convergence rates for cohypomonotone problems. They also show that the extragradient+ algorithm, RAPP, and the Lookahead-based algorithms are all instances of the KM iteration. They show experiments on synthetic problems and GANs. Strengths: - Gives a nice summary of the problem setting and also clearly states what the paper is trying to resolve. - Resolves questions that were implied by the literature, and thus would be interesting to other researchers. - Gave nice proofs that generalize a number of methods as instances of the (inexact) KM iteration. Weaknesses: - An analysis of the wallclock time for the algorithms would have been nice. - It would have been nice to see experiments with different hyperparameters for the Adam optimizer, such as different learning rates or $\beta$ parameters. - In the abstract: I'm not sure I am convinced that "linear interpolation as a key method for stabilizing (large-scale) neural network training." The experiments haven't convinced me of this. I would probably rephrase it as "helping stabilize" or something similar. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - How did the wallclock time of Adam compare with the various Lookahead and Extragradient algorithms? - I know the authors mention that the point of the GAN experiment is not to beat the state-of-the-art, but are there any implications of these algorithms for achieving state-of-the-art performance? Confidence: 3: You are fairly confident in your assessment. 
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: - The authors don't really explicitly state their limitations, except perhaps to not compare their GAN experiment to the state-of-the-art. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable feedback and address all remaining concerns below:

- **Wallclock time** The wallclock time is essentially made to be the same across all methods in the experiments by providing each method with the same number of gradient computations (see lines 263-267). All the methods (Adam, SGD, Lookahead, extragradient methods and RAPP) carry out the same basic operation repeatedly: adding a gradient to an iterate. The only (crucial) difference is *what* gradient and iterate are involved. So all the methods are provided with the same computational budget.
- **Adam hyperparameters** We note that we have used the *optimized* hyperparameters of Adam from Chavdarova et al. (2020) (see [below Table 10](https://arxiv.org/pdf/2006.14567.pdf)).
- **Implication for state-of-the-art** It is definitely interesting to try RAPP at larger scale, especially in transformer-based setups where tricks like gradient clipping are otherwise used for preventing divergence (see e.g. [StyleSwin](https://openaccess.thecvf.com/content/CVPR2022/papers/Zhang_StyleSwin_Transformer-Based_GAN_for_High-Resolution_Image_Generation_CVPR_2022_paper.pdf) and the associated code).
- **Abstract** We will rephrase the abstract as suggested.

---

Rebuttal Comment 1.1: Comment: Thanks for your reply. It definitely cleared up some questions. I'll keep my rating.
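As a concrete illustration of the linear-interpolation mechanism discussed in this thread, here is a minimal sketch (not the paper's code; the hyperparameters $\gamma$, $k$, $\alpha$ are our own illustrative choices) of Lookahead wrapped around GDA on the bilinear game $\min_x \max_y xy$, where plain GDA is known to spiral away from the equilibrium:

```python
import numpy as np

def gda_step(z, gamma=0.1):
    # One gradient descent-ascent step on the bilinear game min_x max_y x*y.
    x, y = z
    return np.array([x - gamma * y, y + gamma * x])

def lookahead(z, k=5, alpha=0.5, gamma=0.1):
    # Lookahead outer step: k fast GDA steps, then linear interpolation
    # back toward the snapshot -- an averaged (KM-style) update.
    fast = z.copy()
    for _ in range(k):
        fast = gda_step(fast, gamma)
    return z + alpha * (fast - z)

z_gda = np.array([1.0, 1.0])
for _ in range(500):
    z_gda = gda_step(z_gda)

z_la = np.array([1.0, 1.0])
for _ in range(200):
    z_la = lookahead(z_la)

print(np.linalg.norm(z_gda))  # plain GDA drifts away from (0, 0)
print(np.linalg.norm(z_la))   # interpolated iterates contract toward (0, 0)
```

Each GDA step multiplies the iterate norm by $\sqrt{1+\gamma^2} > 1$, while the interpolated outer map $(1-\alpha)I + \alpha M^k$ is a contraction for these parameter values, which is the stabilization effect the discussion refers to.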
Summary: This paper studies the global convergence problem under the cohypomonotonicity structural assumption. The authors prove a global convergence rate for the last iterate of their proposed algorithm RAPP. RAPP is the first explicit scheme to have non-asymptotic guarantees for the full range of cohypomonotone problems. Strengths: 1. This paper proves global convergence rates for the last iterate of the proposed algorithm RAPP, which additionally handles constrained and regularized settings. 2. By replacing the inner optimization routine in RAPP with GDA and EG, this paper rediscovers the Lookahead algorithms. Guarantees for the Lookahead variants by deriving nonexpansive properties of the base optimizers are also obtained. 3. Their algorithm is tested experimentally. Weaknesses: 1. They have only dealt with compositions of operators. 2. Whether RAPP could be extended to more general cases is not clear. Technical Quality: 3 good Clarity: 3 good Questions for Authors: N/A Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Listed in Weakness section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable feedback. We agree that it is definitely interesting to see if we can extend the results further and we indeed have some preliminary positive results going beyond compositions. --- Rebuttal Comment 1.1: Comment: Thanks for the reply. I will keep my rating.
Summary: This paper continues a line of work motivated by the need to design algorithms for non-convex non-concave min-max problems. Such problems arise in the training of GANs as well as reinforcement learning via self-play. Since solving general non-convex non-concave min-max problems is intractable, the main approach in this line of work is to study the subclass of problems for which the gradient map (in the constrained case one additionally adds a map corresponding to the normal vectors of the constraints) is a cohypomonotone operator. Even for problems in this subclass, previously developed algorithms can be shown to diverge. The main contribution of this paper is a unified way of designing algorithms for finding zeros of cohypomonotone operators in terms of the KM iteration from the theory of non-monotone operators. This leads to several new results including (1) a new algorithm RAPP which handles constraints and has a convergence rate independent of the cohypomonotonicity parameter $\rho$, and (2) a simple analysis in terms of the KM iteration for the Lookahead algorithm, which additionally provides theoretical justification for the empirically most useful setting of the main hyperparameter for Lookahead. Strengths: 1. The paper provides both a new algorithm that provably converges for a broader class of problems (i.e. those with constraints and no dependence on the cohypomonotonicity parameter) than previously known. 2. The paper gives a simple and intuitive general framework for analyzing optimization algorithms for cohypomonotone operators, which gives additional insight into previously discovered algorithms (in particular Lookahead). 3. There are also experimental results illustrating the convergence advantage of such algorithms, including in the large-scale GAN training setting. Weaknesses: The paper is a little bit short on exposition explaining why finding the zeros of cohypomonotone operators is important. 
Of course this strictly generalizes solving convex-concave min-max problems, but exactly how much it generalizes is not stated. In particular, Example 3.1 gives an example of how the operator arises from a min-max problem, but does not explain when/why this operator will satisfy the conditions of Assumption 3.2. While the empirical results in the paper justify that there are practical min-max problems where the proposed algorithms converge while other approaches diverge, it would be nice to have a simple analytic example to illustrate more clearly what types of min-max problems give rise to cohypomonotone operators. Technical Quality: 4 excellent Clarity: 2 fair Questions for Authors: 1. Is there a simple example of a non-convex non-concave min-max problem so that it is easy to compute by hand that the associated operator is cohypomonotone? 2. Is there a (even hand-wavy) theoretical reason to expect that cohypomonotonicity is a good fit for modelling large-scale min-max problems arising for GANs? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 2 fair Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable feedback and address all remaining concerns below: **Generality of cohypomonotonicity and simple examples** One of the simplest examples is probably Example H.2 in the appendix where we also provide the closed-form solution to $\rho$ and $L$. Another simple condition is the one proposed in [Example 1, Lee & Kim 2021](https://arxiv.org/pdf/2106.02326.pdf) which shows that a certain problem class is cohypomonotone when a second order condition is satisfied (known as the interaction dominant condition). Additionally, all the results extend to weak MVIs apart from the last iterate result of RAPP (see Appendix B.1 where we discuss this). So other examples include the two-agent zero-sum reinforcement learning problem mentioned in [Diakonikolas et al. 2021](https://arxiv.org/pdf/2011.00364.pdf), all quasi-convex-concave and all star-convex-concave problems. Note that for the last iterate result we only assume that cohypomonotonicity holds for all pairs of points for simplicity. We really only need the condition to hold for each *consecutive pair of points* of the iterates, so we could expect this to hold even in many weak MVIs. **Intuition behind applying to GANs** Our analysis primarily relies on application of cohypomonotonicity between a given iterate $z^k$ and some solution $z^\star$. Let us for simplicity consider the unconstrained case, where the condition reduces to $\langle Fz^k, z^k-z^\star \rangle \geq \rho\|Fz^k\|^2$. This allows the gradients to point away from the solution set (when $\rho$ is negative), which is generally the case in nonconvex landscapes. The slack provided on the angle is particularly large when the norm of the gradient is large, which is usually the case in the beginning of training. In general the slack can be larger when the stepsize is large through the relationship $\rho > -2/\gamma$ (large stepsizes, e.g. 0.1, are what is used in practice). 
It is definitely interesting to see if the condition can be relaxed further. Another motivation for us was the instability at the solutions illustrated in [Figure 5, Berard 2020](https://arxiv.org/pdf/1906.04848.pdf) by inspection of the Hessian/Jacobian. Linear interpolation can be seen as a way to stabilize the dynamics locally as well. We will comment on this in the final version. --- Rebuttal Comment 1.1: Comment: Thanks for your response clarifying the examples of cohypomonotonicity and the application to GANs. I will keep my score.
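To make the cohypomonotonicity condition from this exchange concrete, here is a small numerical sketch (our own toy operator, not an example from the paper) estimating the tightest $\rho$ in $\langle Fz, z - z^\star \rangle \geq \rho\|Fz\|^2$ for a linear operator $F(z) = Az$ with solution $z^\star = 0$:

```python
import numpy as np

# Toy linear operator with a small negative-definite symmetric part:
# <Az, z> = -a * ||z||^2 and ||Az||^2 = (1 + a^2) * ||z||^2, so the
# condition holds exactly for rho <= -a / (1 + a^2) -- negative, hence
# cohypomonotone but not monotone.
a = 0.1
A = np.array([[-a, 1.0], [-1.0, -a]])

rng = np.random.default_rng(0)
zs = rng.standard_normal((1000, 2))
ratios = [z @ (A @ z) / np.sum((A @ z) ** 2) for z in zs]
rho_hat = min(ratios)

print(rho_hat)  # matches -a / (1 + a**2), i.e. about -0.099
```

For this particular $A$ the ratio is the same for every $z$, so the empirical minimum recovers the exact parameter; for nonlinear operators the same sampling gives only a lower-bound estimate.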
null
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Sharp Recovery Thresholds of Tensor PCA Spectral Algorithms
Accept (poster)
Summary: This paper considers the (Gaussian, nonsymmetric, rank-1) spiked tensor model introduced in Montanari and Richard. It gives sharp thresholds for various matricization techniques (unfolding, partial traces, successive contraction). The statements about random tensors are reduced to known results about left and right singular vectors of spiked non-symmetric random matrices. The proofs of the new statements are a few lines long. Some numerical results back up the performance statement about successive contraction. Strengths: The presentation is generally clear and concise. The proofs seem reasonably detailed given the subject matter. Some of the results appear novel and improve over substantially longer published work. Highlights include 1) Improving an analysis in Hopkins et al. 2) Generalizing the successive contraction method of Montanari and Richard with excellent performance properties. Weaknesses: The new theoretical content in this paper is very slight: the random matrix theory of spiked Gaussian matrices is extremely well-developed. Once it is noticed that the unfolded tensors satisfy the hypotheses of these theorems, the proof is immediate. There is no further elaboration beyond this: 1) the paper does not show how this fits into a larger picture of non-random-matrix-theory approaches. 2) the paper does not attempt to expand interesting unexplored aspects which don't follow from textbook random matrix theory (the best hint of this is the numerics that appear for $\tau=1$ at the end). 3) the new technique (in the abstract "Finally, we introduce an iterative algorithm based on successive contractions of the tensor") is rather a generalization of a technique of Montanari and Richard. The analysis given is in the phase of perfect recovery, exploiting some of the specific structure of the spiked model. This is a reasonable topic for a purely theoretical NeurIPS contribution, but there is not enough new material here to warrant publication. 
Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: No questions. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 4 excellent Contribution: 1 poor Limitations: See above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
null
Summary: This paper studies recovery of planted signals from noisy tensors. The model is the prototypical tensor PCA and the emphasis is that (i) the planted low rank tensors (in particular their dimensions) can be different along different modes; (ii) the joint scaling of different dimensions can deviate from the classical proportional regime. The paper studies a few old and new natural algorithms and obtains sharp performance guarantees (specifically the overlap) for almost all of them. Strengths: Both the techniques and the results of this paper are neat. It properly combines recent (and some not-so-recent) results in random matrix theory (RMT) and high-dimensional statistics and extends them to a satisfactory extent. Once some prior results are properly organized, the proofs of the present results become rather straightforward. But for ICML, I think this is enough. Also, I thought about a subset of the questions studied in this paper before (perhaps before some of the recent RMT results were available) and at that time it was not clear how these results could be proved. Weaknesses: One important aspect of tensor PCA that the authors seem to largely ignore is the potential existence of an information-computation gap. The authors may want to check papers of Sam Hopkins, David Steurer, Andrea Montanari, Guy Bresler, Alex Wein (listed in random order) and many others. They provide evidence of average case hardness from the perspectives of SoS, low-degree polynomials, reduction, statistical query lower bounds, AMP, statistical physics, etc. I know the information-theoretic aspect and the info-comp gap is not the focus of this paper. But when it comes to such a prototypical problem of tensor PCA, it's always good to say a few words about it so that the present results are properly positioned in the literature. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. In equation (1.3), should (r) in the superscript be (i)? 2. 
Line 58 'Where we estimate v_j^{(1)} by the first right singular vector...'. Here by 'first', I suppose lambda1 >= lambda2 >= ... is assumed. Am I right? 3. The set {v_j^{(k)}} is assumed to be orthonormal. How strong is this assumption? It seems a bit more realistic to assume v_j^{(k)}'s are in general position. Also, my impression is that this will create quite some technical challenges. Correct me if I'm wrong. 4. Some theorem/lemma statements are not sufficiently clear. E.g., Theorem 2.1, line 120, 122, does 'recovered' still refer to partial recovery as in line 118? I suggest stating it more formally, at least in the theorem statement. Please check thoroughly for similar issues. 5. Lemma 3.2 is the standard Gaussian conditioning lemma. It'll be nice to put a reference there. The standard one might be Lemma 11 of Bayati--Montanari (https://arxiv.org/abs/1001.3448), though there might be better ones. Please check the literature. 6. Proof of Theorem 3.1, I feel it's actually better to do the proof for general k instead of k=3, maybe in the full version of the paper if the authors wish. 7. The successive contraction algorithm is indeed quite natural. It reminds me of something called decimation AMP studied here https://arxiv.org/abs/2306.01412. That's in the context of *high*-rank matrix estimation (using AMP, but never mind that) and it estimates each component of the planted matrix sequentially and subtracts it out for the next one. 8. I'm not sure if the authors are aware of the following line of work that analyzes MLE (which is computationally expensive) for tensor PCA using a novel technique. The results are not completely new since MLE has been well-understood and moreover their results require the assumption that the limit of overlap exists. However, their techniques involve standard RMT and seem conceptually more accessible. The authors may want to look into or even consider citing some of them. 
- https://arxiv.org/abs/2108.00774 (square case) - https://arxiv.org/abs/2112.12348 (rectangular case) - and related works by roughly the same authors - https://arxiv.org/abs/2110.01652 (spectra of contracted tensors) Minor things: - I found it slightly weird that the citations are in the format (1), (2), etc. Note that the same format is also used for bullet points (see line 95, 110). Very similar notation is also used for equation numbers (1.1), (1.2). To avoid confusion, I suggest using the more standard [1], [2] or [ABC20], [XYZ99] format for citation. - Sometimes the first letter of 'proof of Theorem XXX' is not capitalized. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the detailed comments and questions! We are familiar with the information-computation gap for tensor PCA and agree with your suggestion that it should be mentioned in the introduction. Here are point-by-point replies: 1. Yes, that is a typo. 2. Yes, we will add a clarification. 3. It is not an inconsequential assumption, although it generalizes most of the tensor PCA literature, which assumes $r=1$. “Low rank” does not generalize from matrices to tensors as nicely. For example, tensor rank is often defined to be the minimal $r$ such that $T = \sum_{j=1}^r v_1^{(j)} \otimes \cdots \otimes v_k^{(j)}$, where $\{v_1^{(j)}, \ldots, v_k^{(j)}\}$ are not necessarily orthogonal. A tensor that is low-rank under this definition may not have a decomposition into an equal number of terms with orthogonal vectors (this decomposition does always exist in the matrix case $k=2$). 4. Yes, this is partial recovery. We will clarify this. 5. Yes, this is standard. Thank you for the suggested reference, we have added it. 6. We elected to write out the proof for $k=3$ for sake of simplicity, the general proof is nearly identical, although sums such as (3.2) become $k$ sums over each tensor axis.
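The contrast in point 3 above can be illustrated with a quick sketch (ours, purely illustrative): for matrices ($k=2$), the SVD always converts a rank-$r$ decomposition with non-orthogonal factors into one with the same number of terms and orthonormal factors, which is exactly the guarantee that fails for $k \geq 3$:

```python
import numpy as np

# A rank-2 matrix built from NON-orthogonal factor pairs...
u1, u2 = np.array([1.0, 0.0, 0.0]), np.array([1.0, 1.0, 0.0])
v1, v2 = np.array([0.0, 1.0]), np.array([1.0, 1.0])
M = np.outer(u1, v1) + np.outer(u2, v2)

# ...still admits a decomposition with the SAME number of terms and
# orthonormal factors: its SVD. No analogous guarantee exists for tensors.
U, s, Vt = np.linalg.svd(M)
M_rebuilt = sum(s[j] * np.outer(U[:, j], Vt[j]) for j in range(2))

print(np.linalg.matrix_rank(M))   # 2
print(np.allclose(M, M_rebuilt))  # True: two orthogonal terms suffice
```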
Summary: This paper is concerned with the tensor PCA problem (low-rank tensor recovery with Gaussian noise), in which the authors propose three new ways to approach the problem with theoretical guarantees. Strengths: 1. This paper uses a succinct way to introduce to readers 3 different approaches to tensor PCA, all with formal guarantees. Therefore, a reader can quickly identify the approach they personally need. 2. All of the results are based on well-established matrix PCA results, so it is easier for readers to understand, and also gives people stronger confidence to use, as tensor analysis is notoriously hard. Weaknesses: 1. Although the results themselves are fine and solid to my eyes, none of them provide new insights into the problem, nor did the authors come up with new or innovative techniques. It just seems to me that all this paper is doing is adapting a few matrix PCA techniques to the tensor regime, and only performing minimal changes to the procedure so that the framework can be used. This does not mean the results themselves are inferior, but the merit this paper brings is very limited beyond the verbatim theorems. 2. The paper presented three different approaches, but did not compare/contrast the three, so unfamiliar readers may not know which is the best technique to apply. 3. The simulation is limited (however, this is mostly a theoretical paper, so acceptable). Technical Quality: 3 good Clarity: 3 good Questions for Authors: N/A Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. 
Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review of our manuscript. Tensor unfolding and partial trace, we discover, are asymptotically equivalent in performance. Successive contraction is shown in Section 4 to achieve exact recovery above the common threshold of tensor unfolding/partial trace, and we recommend it be used in practice. The benefits of successive contraction over tensor unfolding/partial trace in simulations are shown in Figure 2. We have conducted additional simulations in the case where $n_1 = n_2 \ll n_3$; please see the plots attached to our response. --- Rebuttal Comment 1.1: Comment: Thank you for your response, and I acknowledge your arguments. I'm not saying the results themselves are not good, however, in my eyes they do not bring any new high-level ideas to the table, and are therefore probably better fitted for a journal submission where the authors can really flesh out the details. After deliberation I am keeping my original scores, and I thank the authors for their interactions again.
Summary: This paper studies tensor principal component analysis (PCA) in a high-dimensional asymptotic framework, where each array dimension tends to infinity. The authors analyze matricization-based approaches, which convert tensors to matrices and then apply spectral methods. They fully analyze tensor unfolding or reshaping and the partial trace approach, discovering their asymptotic equivalence in performance. They identify sharp thresholds in signal-to-noise ratio above which these algorithms partially recover the signal and provide exact formulas for their limiting performance. Finally, they introduce an iterative algorithm based on successive contractions of the tensor, which achieves exact signal recovery above the same thresholds characterizing partial recovery of unfolding and partial trace. The authors' approach relies on fundamental random matrix theory results. Strengths: The main strengths of the paper are: 1. The paper provides a comprehensive analysis of tensor PCA algorithms in a high-dimensional asymptotic framework, allowing for tensors of diverse dimensions and unrelated signals. 2. The paper introduces an iterative algorithm based on successive contractions of the tensor, which achieves exact signal recovery above the best-known algorithmic threshold. 3. The paper provides exact formulas for the limiting performance of the algorithms and identifies sharp thresholds in the signal-to-noise ratio above which these algorithms partially recover the signal. Weaknesses: The main weaknesses of the paper are: 1. The presented successive contraction is demonstrated to achieve exact recovery only asymptotically, while a non-asymptotic result would be more interesting to investigate. In fact, the authors should compare the obtained performance of the successive contraction method in a non-asymptotic regime with the MLE which was analyzed in [1]. 2. It is unclear how the successive contraction method differs from the tensor power iteration. 
In fact, [2] showed that the latter also achieves exact recovery above the algorithmic threshold when initialized with signal estimates using the tensor unfolding method. [1] Mohamed El Amine Seddik, Maxime Guillaud, and Romain Couillet. “When Random Tensors meet Random Matrices”. In: arXiv preprint arXiv:2112.12348 (2021) [2] Arnab Auddy and Ming Yuan. “On Estimating Rank-One Spiked Tensors in the Presence of Heavy Tailed Errors”. In: arXiv preprint arXiv:2107.09660 (2021) Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Please refer to the weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: The main limitations of the paper are: 1. The paper considers an asymptotic analysis for Tensor PCA assuming the dimension of the tensor to grow to infinity, while a non-asymptotic analysis or rather a quantification of the fluctuations would be interesting to investigate. 2. Most of the presented results in the paper are relatively known in the literature (Tensor Unfolding and Partial Trace). 3. Lack of comparison of the presented successive contraction method with the tensor power iteration algorithm. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
null
Rebuttal 1: Rebuttal: We thank the reviewers for the time and effort they have invested into the review of our manuscript, and for their helpful comments and suggestions. We would like to address the reviewers’ main criticism, that our results follow easily from PCA results of random matrix theory (RMT). The primary purpose of this work is to exhibit spiked matrix results (those of [2] as well as newer results of [1] and [3]) and demonstrate how they can be applied to machine learning problems. While this literature is well known within RMT and to the reviewers, the machine learning community is (in our opinion) less familiar with it. There are numerous potential applications of this theory to machine learning, many of which we believe are underexplored. As machine learning problems often concern the setting where dimensions exceed sample size, the recent results of [1] and [3] (which generalize previous “proportional” results of [2] to the “disproportional” setting) may be particularly useful. Our work concisely demonstrates the advantages of this RMT-backed approach in the context of tensor PCA. Indeed, application of spiked matrix results yields simple and elegant theorems which may be easily read by theoreticians as well as practitioners. For example, analysis of the partial trace method, originally done by challenging matrix concentration and perturbation calculations, is reduced to a few careful lines. Though our results are admittedly purely asymptotic, there are no hidden constants or logarithmic factors as in [4] (which frequently give practitioners pause). Even for modest-sized tensors, our simulations demonstrate close agreement with theory. Moreover, this approach reveals the exact asymptotic equivalence in performance of tensor unfolding and partial trace. To the best of our knowledge, we are the first to make this discovery. 
Our paper makes clear that for a number of (seemingly different) algorithms, performance fundamentally derives from spectral properties of spiked matrices. We agree that what we have called successive contraction is simply a generalization of the power iteration algorithm of [5] and ought to be called as such. While previous analysis in [6] is limited to the setting where $n_1 \asymp n_2 \asymp \cdots \asymp n_k$, we make no assumptions on the relative rates of growth of $n_1, n_2, \ldots, n_k$. We have conducted additional simulations where $n_1 = n_2 = 50$ and $n_3 = 10000$ or $n_3 = 20000$, again demonstrating agreement with theory. Please see the attached plots. If accepted, we will clarify these questions in the revision. [1] G. Ben Arous, D. Huang, and J. Huang. Long Random Matrices and Tensor Unfolding. arXiv preprint arXiv:2110.1021, 2021. [2] F. Benaych-Georges and R. Rao Nadakuditi. The singular values and vectors of low rank perturbations of large rectangular random matrices. Journal of Multivariate Analysis, 111:120–135, 2012. [3] M. Feldman. Spiked Singular Values and Vectors under Extreme Aspect Ratios. Journal of Multivariate Analysis, 196, 2023. [4] S. Hopkins, T. Schramm, J. Shi, and D. Steurer. Fast spectral algorithms from sum-of-squares proofs: tensor decomposition and planted sparse vectors. Proceedings of the 48th Annual ACM SIGACT Symposium on Theory of Computing, 178–191, 2016. [5] A. Montanari and E. Richard. A statistical model for tensor PCA. Proceedings of the 27th International Conference on Neural Information Processing Systems, 2:2897–2905, 2014. [6] M. E. A. Seddik, M. Guillaud, and R. Couillet. When random tensors meet random matrices. arXiv preprint arXiv:2112.12348, 2021. Pdf: /pdf/be4cc94dc7913a3d0f59143f64a2e19ad902e7d1.pdf
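A minimal sketch of the tensor-unfolding estimator discussed throughout this thread (illustrative only; the dimensions, seed and signal strength are our own choices, set comfortably above the recovery threshold):

```python
import numpy as np

rng = np.random.default_rng(0)
n1, n2, n3 = 30, 30, 30

# Rank-1 spiked tensor: T = beta * v1 (x) v2 (x) v3 + iid Gaussian noise.
v1, v2, v3 = (x / np.linalg.norm(x) for x in rng.standard_normal((3, 30)))
beta = 10 * (n1 * n2 * n3) ** 0.25  # strong signal, well above threshold
T = beta * np.einsum('i,j,k->ijk', v1, v2, v3) + rng.standard_normal((n1, n2, n3))

# Unfold along the first mode into an n1 x (n2*n3) matrix and take the
# top LEFT singular vector as the estimate of v1.
M = T.reshape(n1, n2 * n3)
u_hat = np.linalg.svd(M, full_matrices=False)[0][:, 0]

overlap = abs(u_hat @ v1)
print(overlap)  # close to 1 in this strong-signal regime
```

The unfolded matrix is exactly a rank-1 spiked rectangular Gaussian matrix, which is why the cited spiked-matrix results apply directly.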
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
On the Ability of Graph Neural Networks to Model Interactions Between Vertices
Accept (poster)
Summary: The paper studies how to characterize the ability of Graph Neural Networks (GNNs) to model the interaction given a partition of vertices. They quantify the interaction strength with separation rank, and prove that it is governed by walk index (the number of walks starting from the partition boundary). This relation is empirically demonstrated in their experiments, where a GNN with higher walk index beats one with lower walk index. Lastly, Walk Index Sparsification is proposed to perform edge deletion based on walk index, and it markedly outperforms other baselines in terms of preserving prediction accuracy. Strengths: - An overall interesting theoretical work that supports the intuition: the interaction between two parts in a graph depends on how interconnected their boundary is. - The organization is good, the theoretical analysis looks sound and the experiments are solid. The application to edge sparsification is impressive. Weaknesses: - Walk Index Sparsification is unable to scale up to large graphs with deep GNNs, given its $O(|E||V|^3\log(L))$ complexity. Correspondingly, the experiments only covered 2-WIS and 1-WIS. To further improve, the authors could discuss some potentially efficient implementations. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See Weaknesses. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: As stated in Weaknesses, the computational overhead is a concern. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your time and feedback. We are glad that you found our theory interesting, supported by solid experiments, and with an impressive application to edge sparsification. We address your concern below, and would greatly appreciate it if you would consider increasing your score. > ​​Walk Index Sparsification is unable to scale up to large graphs with deep GNNs, given its $O(|E||V|^3 \log (L))$ complexity. Correspondingly, the experiments only covered 2-WIS and 1-WIS. To further improve, the authors could discuss some potentially efficient implementations. As discussed in lines 284 to 293 of the paper, a naive implementation of (L - 1)-WIS (i.e. WIS for depth L GNNs) indeed requires runtime cubic in the number of vertices. While exploration of more efficient exact implementations (including ones involving parallelization) is left for future work, we propose and evaluate an extremely efficient approximation, namely 1-WIS (i.e. WIS for depth 2 GNNs), which requires only linear runtime and memory. For example, we demonstrated the applicability of 1-WIS to depth 10 GNNs (see Appendix H) over the OGBN-ArXiv dataset, which has more than a hundred thousand vertices and a million edges. Running 1-WIS over OGBN-ArXiv takes only ~20 minutes on a single V100 GPU.
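As a rough illustration of why a 1-WIS-style method can be so cheap, the sketch below greedily removes edges using only endpoint degrees. To be clear, this is our own assumed simplification, not the paper's Algorithm 2: the exact 1-WIS selection and tie-breaking rule differs, and the paper reports linear runtime, whereas this naive greedy is O(k·|E|) for k removals. The heuristic here is to remove the edge whose endpoints have the largest degree sum, so that low-degree vertices keep their edges.

```python
def degree_based_sparsify(edges, num_remove):
    """Illustrative degree-based edge sparsifier in the spirit of 1-WIS.

    NOTE: the actual 1-WIS selection/tie-breaking rule is given in the
    paper's Algorithm 2; this is only an assumed, simplified variant that
    repeatedly drops the edge with the largest endpoint degree sum.
    """
    edges = list(edges)
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    removed = []
    for _ in range(num_remove):
        # naive O(|E|) scan per removal; a bucketed implementation could be faster
        u, v = max(edges, key=lambda e: deg[e[0]] + deg[e[1]])
        edges.remove((u, v))
        deg[u] -= 1
        deg[v] -= 1
        removed.append((u, v))
    return edges, removed
```

On a star with center 0 plus an isolated edge (4, 5), this rule removes a star edge first and leaves the fragile pendant edge intact.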
Summary: The paper proposes a new way to analyze the capacity of GNNs based on the complexity of interactions they can model across a partitioning of the nodes. In particular, this complexity is quantified via *separation rank*, and, for a certain kind of GNN with product aggregation, this rank is proved to scale with *walk index*, the number of random walks originating from a vertex on the partition's boundary. Experiments indicate that this finding generalizes beyond the particular GNN in the proofs. First, three different GNNs are shown to perform better at an image classification task when the data are converted to a graph so as to maximize walk index across the relevant partition. Second, the paper proposes WIS, a way to reduce the number of edges in the graph while maintaining GNN performance, based on maintaining high walk indices. WIS is shown to outperform prior methods on several real-world networks. Strengths: - Conceptual and theoretical explanations on the capacity of GNNs are an important topic, and this paper takes a fresh approach as compared to much of the WL-isomorphism-based work. - The paper is well-organized, and the writing is grammatical and easy-to-read. - Experiments show the superior efficacy of the proposed WIS method for graph sparsification relative to prior methods. Weaknesses: - While the machinery of the proof appears to be quite complex, I would appreciate some description of the proof concept in the main paper, even if it is at a very high level. - Related to the prior point on the proof concept, I am not totally convinced that walk index is the fundamental quantity here. Particularly the results in Figure 3 for Cora, where 2-WIS's performance is essentially matched by 1-WIS (which simply relates to node degrees), suggest that there may be a simpler explanation. To convince the reader, perhaps there could be more conceptual description of walk index and separation rank, and why the two are fundamentally linked. 
Or perhaps there could be more experiments showing that walk index is more predictive of performance as compared to other notions of connectivity. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Do you see this work as totally orthogonal to WL-isomorphism-based limitation work? As I see it, those results are all-or-nothing, in the sense that they show GNNs either can or cannot distinguish two kinds of graphs. The bound here instead is somehow softer and more nuanced, showing that the complexity at which the GNN can model some interaction (as quantified by separation rank) scales with walk index. Is there some sense in which, or some specific cases for which, the WL results are subsumed by your new results? - Related to the second 'weakness' above, do you see walk index as the fundamental notion here? If so, why? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Some limitations are noted throughout the work -- for example, it is noted that the theoretical results only formally apply to GNNs with product aggregation. A more collected discussion of limitations of the current analysis/understanding, and possible future directions, would be helpful. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the positive feedback and support! > While the machinery of the proof appears to be quite complex, I would appreciate some description of the proof concept in the main paper, even if it is at a very high level. Due to lack of space we deferred a high-level description of the proof concepts to Appendix A. For the camera-ready version, we will use the additional space to incorporate this content in the main paper. Thank you for the suggestion! > Related to the prior point on the proof concept, I am not totally convinced that walk index is the fundamental quantity here. Particularly the results in Figure 3 for Cora, where 2-WIS's performance is essentially matched by 1-WIS (which simply relates to node degrees), suggest that there may be a simpler explanation. To convince the reader, perhaps there could be more conceptual description of walk index and separation rank, and why the two are fundamentally linked. Or perhaps there could be more experiments showing that walk index is more predictive of performance as compared to other notions of connectivity. Our theory implies that the walk index is a fundamental quantity in the sense that it governs the strength of interactions that the analyzed GNNs can model across a partition of vertices. Intuitively, it characterizes the amount of computation a GNN performs to aggregate information across the partition. With regards to experimentation, it is challenging to judge this way whether the walk index is more fundamental than alternative quantities, the reason being that factors beyond expressivity (e.g. ones relating to optimization and generalization) may confound results. We did however show that WIS, which is explicitly based on the walk index, sparsifies edges better than known alternatives. 1-WIS is also based on the walk index (despite being expressible in simpler terms) so we do not view its strong performance as an indication that the walk index may not be the central quantity. 
We believe further exploration of WIS, and in particular regimes where higher-order WIS performs better than 1-WIS, is a promising direction for future work. > Do you see this work as totally orthogonal to WL-isomorphism-based limitation work? As I see it, those results are all-or-nothing, in the sense that they show GNNs either can or cannot distinguish two kinds of graphs. The bound here instead is somehow softer and more nuanced, showing that the complexity at which the GNN can model some interaction (as quantified by separation rank) scales with walk index. Is there some sense in which, or some specific cases for which, the WL results are subsumed by your new results? This work can be viewed as orthogonal to WL-based analyses of expressive power. Namely, WL-based analyses characterize which graphs a GNN can distinguish between. In contrast, for a given graph topology and fixed-size GNN, our work characterizes the type of functions the GNN can represent with respect to the vertex features, as measured by separation rank. We believe both approaches bear significance, yet while studies of the ability of GNNs to distinguish non-isomorphic graphs are abundant, much less is known about the type of functions GNNs can efficiently represent with respect to the vertex features. We hope our work can lead to further progress along this line, whether through the notion of separation rank or through other notions of complexity that take into account the vertex features. > Some limitations are noted throughout the work -- for example, it is noted that the theoretical results only formally apply to GNNs with product aggregation. A more collected discussion of limitations of the current analysis/understanding, and possible future directions, would be helpful. As you noted, we mentioned possible limitations of our work throughout. In the revised manuscript we will also include a concentrated account of them and future directions in the summary section. 
Thank you for the suggestion! --- Rebuttal Comment 1.1: Comment: Thank you for your response. I will keep my score to support the paper, though I hope that a few limitations will be noted clearly: - The theoretical results only apply to product aggregation, which is not commonly used to my knowledge, and, as Reviewer CGQv also noted, the aggregation function is important. - Empirically, k-WIS has not been shown to be more useful than 1-WIS (degree information). I think some demo to this effect would help corroborate the theory and overcome the point above about the aggregation function, even if the k-WIS variant is not scalable. --- Reply to Comment 1.1.1: Comment: We are adding to the text a concentrated account of limitations and directions for future work. The points you raised (which currently are mentioned throughout) will be noted clearly in this account. Thank you for the support!
Summary: This paper proposes to analyze the expressivity of graph neural networks (GNNs) by their ability to model interactions. The authors consider GNNs with a product aggregation scheme, and analyze their ability to model interactions for a given partition of the graph through the notion of separation rank. The authors show quantitatively that these GNNs can model interactions better for partitions with higher walk index (Theorem 1). Motivated by their theoretical analysis, the authors propose an edge sparsification algorithm --- Walk Index Sparsification (WIS, Algorithm 1, 2) --- to drop edges that have the least impact on the GNN's ability to model interactions for a given partition in a given graph. The utility of WIS is demonstrated on both graph-level prediction and vertex-level prediction tasks. Strengths: 1. The proposal to study the expressivity of GNNs via their ability to model vertex interactions is interesting. 2. The theoretical analysis is novel: by focusing on GNNs with product aggregation, the authors connect them to tensor networks, whose notion of separation rank can be used to study the network's ability to model interactions. 3. The paper is overall well-written and easy to follow, where important technical details and illustrative examples are explained clearly in the supplementary materials. Weaknesses: 1. The theoretical analysis focuses on a special kind of GNNs with product aggregation, whereas the empirical evaluations are all done using standard GNNs with mean aggregation and ReLU nonlinearity. Although the authors meant to show that theory results can carry over to standard GNNs, they do not explain explicitly how such connections can be drawn. If the main contribution of the paper is...
(a) to show that using separation rank to understand GNNs is relevant, then the authors should outline how the analysis carries forward to standard GNNs (b) to show that walk index of partitions in a graph determines GNN performance, then the authors should examine the case where the original graph is replaced with a fully-connected graph, and discuss GNNs with Graph Transformers (c) to propose an edge sparsification algorithm, then comparisons with stronger baselines are needed, such as DropEdge [1] and curvature-based graph rewiring methods [2]. 2. The conclusion that GNNs can model interactions better on partitions that have high walk index (i.e. more interconnected) seems to crucially depend on the aggregation function being tied to the graph topology. However, it seems possible to use a different aggregation/pooling function, decoupled from the graph topology, to capture interactions among partitions with low walk index. More explanations on how to interpret the results are needed. References: [1] Rong, Yu, et al. "DropEdge: Towards deep graph convolutional networks on node classification." arXiv preprint arXiv:1907.10903 (2019). [2] Topping, Jake, et al. "Understanding over-squashing and bottlenecks on graphs via curvature." arXiv preprint arXiv:2111.14522 (2021). Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. The analysis focuses on GNNs with product aggregation, i.e. multiplication of the neighborhood nodes' embedding vectors $\mathbf{h}^{(l, i)}=\odot_{j \in \mathcal{N}(i)}\left(\mathbf{W}^{(l)} \mathbf{h}^{(l-1, j)}\right)$, where $\mathcal{N}(i)$ depends on the graph topology. In this set-up, the authors show that such GNNs can model interactions between two partitions in a graph better if they are more interconnected. Yet, earlier work by Cohen and Shashua [3] shows that for the case of convolutional neural networks (CNNs), their ability to model interactions depends on the choice of pooling functions.
In light of this result, it seems plausible for GNNs to model partitions that are less-connected (e.g., Fig 2 -left) if one delineates the aggregation function from the graph topology, which is also commonly used in the literature of graph rewiring (e.g. [2]). Can the authors provide more explicit interpretations of their results and comparison with related work using separation rank for CNNs/RNNs? 2. As a thought example: for GNNs with product aggregation, do they achieve maximal separation rank for a fully-connected graph? How would the WIS algorithm compare to a simple baseline: replacing the original graph with a fully-connected graph, followed by randomly dropping edges (as opposed to (i) edges are removed randomly from the original graph)? This baseline is motivated by [4], where the authors show that replacing the original graph with a fully-connected graph at the output layer can boost GNN performance. 3. In lines 264-266, the authors explain that the edge sparsification algorithm should first choose the partitions of interest, whereas in Algorithms 1 and 2 this choice defaults to partitions induced by singletons versus the rest of the nodes. It seems possible that the optimal partition choice depends on the downstream tasks (e.g., graph-level clustering versus node-level prediction). Can the authors provide more details and recommendations? References: [3] Cohen, Nadav, and Amnon Shashua. "Inductive bias of deep convolutional networks through pooling geometry." arXiv preprint arXiv:1605.06743 (2016). [4] Alon, Uri, and Eran Yahav. "On the bottleneck of graph neural networks and its practical implications." arXiv preprint arXiv:2006.05205 (2020). Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The limitations were discussed, but can be explained more thoroughly through (1) detailed interpretations of the theoretical results (assumptions and applicability); (2) comparison with prior works that make use of separation rank to analyze expressivity of deep learning architectures. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
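The product-aggregation update the reviewer quotes, $\mathbf{h}^{(l, i)}=\odot_{j \in \mathcal{N}(i)}\left(\mathbf{W}^{(l)} \mathbf{h}^{(l-1, j)}\right)$, can be sketched in a few lines of NumPy. This is only a minimal reading of that single layer (the paper's full architecture, depth, and output heads are not reproduced), with neighborhoods passed in explicitly as lists:

```python
import numpy as np

def product_aggregation_layer(H, W, neighbors):
    """One GNN layer with product aggregation:
    h^{(l,i)} = elementwise product over j in N(i) of (W @ h^{(l-1,j)}).

    H: (n, d_in) array of vertex embeddings h^{(l-1,j)}
    W: (d_out, d_in) weight matrix W^{(l)}
    neighbors: neighbors[i] lists N(i) (include i itself for self-loops)
    """
    msgs = H @ W.T                         # row j holds W @ h^{(l-1,j)}
    out = np.ones((H.shape[0], W.shape[0]))
    for i, nbrs in enumerate(neighbors):
        for j in nbrs:
            out[i] *= msgs[j]              # product aggregation (vs. the usual sum/mean)
    return out
```

Replacing the running product with a sum or mean recovers the standard message-passing aggregations discussed in the rebuttal.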
Rebuttal 1: Rebuttal: Thank you for your feedback, for highlighting the interest and novelty of our theory, and for noting that the paper is well-written. We treat your comments and questions below. If our response is satisfactory, we would greatly appreciate it if you would consider raising your score. > (a) to show that using separation rank to understand GNNs is relevant, then the authors should outline how the analysis carries forward to standard GNNs Analyzing neural network architectures identical to those popular in practice is often notoriously difficult. Accordingly, it is customary for works in deep learning theory to analyze different variations of these architectures. Following numerous past works (see lines 144 to 152 of the paper), we analyze neural networks with product aggregation, and demonstrate the applicability of our conclusions to architectures with other aggregation types via experimentation, regarding formal proof of the applicability as an avenue for future work. As stated in lines 163 to 166, some of the past analyses of neural networks with product aggregation were later extended to account for additional aggregations. We believe our theory may be similarly extended, and intend to pursue this route in future work. > (b) to show that walk index of partitions in a graph determines GNN performance, then the authors should examine the case where the original graph is replaced with a fully-connected graph, and discuss GNNs with Graph Transformers Increasing connectivity improves expressivity in terms of separation rank, and there is evidence for it improving performance in practice. However, the performance in practice depends not only on expressivity but also on optimization and generalization, and from that perspective excess connectivity may lead to problems, for example over-smoothing and over-squashing. This is discussed in lines 203 to 213. Regarding comparison to Graph Transformers, our work focuses on message-passing GNNs. 
Analyzing the strength of interactions that Graph Transformers can model and comparing it against our results for message-passing GNNs is a direction we view as appropriate (and interesting!) for future research. > (c) to propose an edge sparsification algorithm, then comparisons with stronger baselines are needed, such as DropEdge [1] and curvature-based graph rewiring methods [2]. Edge sparsification algorithms, such as the one we propose, attempt to maintain the prediction accuracy as the number of removed edges increases, with the goal of reducing computational and/or memory costs. The sparsification is done once, as a preprocessing step, and is fixed during training. Thus, in their original form, DropEdge and graph rewiring algorithms such as that from [2], whose goal is improving prediction accuracy (rather than efficiency), are incomparable to edge sparsification methods. Specifically, DropEdge requires storing the whole graph throughout training, while graph rewiring algorithms typically add or remove only a few edges, with the resulting graph having roughly the same number of edges as the original one (or even more edges). One may try to adapt curvature-based rewiring algorithms to edge sparsification. We believe doing so and comparing them to WIS is an interesting topic for future work. > ...it seems plausible for GNNs to model partitions that are less-connected (e.g., Fig 2 -left) if one delineates the aggregation function from the graph topology, which is also commonly used in the literature of graph rewiring (e.g. [2]). The aggregation in the standard message-passing GNN formulation is done according to the graph structure. Hence we focus on this case in the paper. Indeed, it is possible to detach the pooling geometry from the input graph structure, in which case our results apply as is with the walk index over the input graph replaced by that of the graph defining the pooling geometry.
We hope the above has addressed your inquiry; if not please let us know. > Can the authors provide more explicit interpretations of their results and comparison with related work using separation rank for CNNs/RNNs? CNNs and RNNs can be cast as GNNs with certain input graph structures – grid graph for CNNs and chain graph for RNNs. Accordingly, our separation rank bounds extend those established for CNNs and RNNs to GNNs with arbitrary input graph structures. We will mention this point in the text. Thank you for bringing it up! > How would the WIS algorithm compare to a simple baseline: replacing the original graph with a fully-connected graph, followed by randomly dropping edges (as opposed to (i) edges are removed randomly from the original graph)? Since the baseline ignores all information about the original graph structure, it is expected to yield poor prediction accuracy. To affirm this expectation, we ran experiments over the Cora dataset using GCN. Indeed, **it led to substantially worse test accuracy compared to the rest of the baselines and WIS**. Specifically, over the graphs obtained by the proposed baseline, the test accuracy was roughly 73% on average for each of the edge sparsity levels – 0%, 5%, 10%, …, 100%. This is around the test accuracy one gets when removing all edges, and in particular significantly lower than that achieved by WIS (or the other baselines) — cf. Figure 3 (left) in the paper. > “...It seems possible that the optimal partition choice depends on the downstream tasks (e.g., graph-level clustering versus node-level prediction). Can the authors provide more details and recommendations?” Excellent point. Indeed, the optimal choice of partitions can vary depending on the task at hand. As stated in lines 272 to 275, we focus on a specific instantiation of the general WIS scheme which is tailored for vertex prediction tasks (these are particularly relevant with large-scale graphs). 
Exploring other instantiations, and methods for automatically choosing partitions, are regarded as promising avenues for future work. --- Rebuttal Comment 1.1: Title: Explicit comparison of separation rank bounds obtained from this paper versus prior works Comment: Thank you very much for your detailed response. I have a follow-up question on your point: *CNNs and RNNs can be cast as GNNs with certain input graph structures – grid graph for CNNs and chain graph for RNNs. Accordingly, our separation rank bounds extend those established for CNNs and RNNs to GNNs with arbitrary input graph structures.* Would it be possible to provide an explicit comparison, showing the separation rank bound obtained from your work for GNNs, its applications for CNNs and RNNs (viewed as special instances of graphs), alongside previous bounds obtained solely for CNNs and RNNs? I believe this will increase the technical strength of the paper. --- Reply to Comment 1.1.1: Comment: Certainly! We provide such a comparison below. We will elaborate upon this point in the revised manuscript; thank you for the question! For a two-dimensional grid graph with $N$ vertices, a message-passing GNN can be viewed as a CNN. Given $I \subseteq [N]$, recall that $C_I$ denotes the boundary of the partition induced by $I$, i.e. the set of vertices with an edge crossing the partition. The $L - 1$ walk index of $I$ in the grid graph satisfies $|C_I| \cdot 3^{L-1} \leq WI_{L-1}(I) \leq |C_I| \cdot 5^{L-1}$ (this is because each vertex has degree $5$ including self-loops, except those on an edge or corner of the grid, which are of degree $4$ or $3$ respectively). Hence, Theorem 1 implies that the separation rank of the analyzed GNN with respect to $I$ is at most $D_h^{O(|C_I| \cdot 5^{L-1})}$, where $D_h$ is the network's width. Similarly, for a chain graph with $N$ vertices, a message-passing GNN can be viewed as a bidirectional RNN.
Given a partition induced by $I \subseteq [N]$, the $L - 1$ walk index of $I$ in the chain graph satisfies $|C_I| \cdot 2^{L-1} \leq WI_{L-1}(I) \leq |C_I| \cdot 3^{L-1}$ (this is because each vertex has degree $3$ including self-loops, except those on the endpoints which are of degree $2$). Thus, Theorem 1 implies that the separation rank of the analyzed GNN with respect to $I$ is at most $D_h^{O(|C_I| \cdot 3^{L-1})}$. The above results — special cases of our theory — extend those of [1,2,3,4], which study the separation rank of certain CNNs and RNNs. In particular, the results extend the CNN framework of [1,2] by introducing overlaps to convolution windows,$^\dagger$ and the RNN framework of [3,4] by introducing bidirectionality. Given that the models analyzed in [1,2,3,4] are weaker, the corresponding bounds on separation rank developed in these papers are lower. Consider for example the “odd-even” and “low-high” partitions from Figure 1(c) of [1]. Denote the subsets yielding these partitions by $I_{odd}$ and $I_{low}$ respectively. Applying our CNN result to $I_{odd}$ and $I_{low}$ yields upper bounds of $D_h^{O(N \cdot 5^{L-1})}$ and $D_h^{O(\sqrt{N} \cdot 5^{L-1})}$ respectively. In contrast, the bounds in [1,2] (which were derived for a weaker model) are $D_h^{O(N)}$ and $D_h$ respectively. Similarly, for a chain graph, consider the “odd-even” partition induced by $I_{odd} := \{1, 3, \ldots, N\}$ and the “low-high” partition induced by $I_{low} := \{1, \ldots, N/2\}$. Applying our RNN result to $I_{odd}$ and $I_{low}$ yields upper bounds of $D_h^{O(N \cdot 3^{L-1})}$ and $D_h^{O(3^{L-1})}$ respectively. In contrast, the bounds in [3,4] (which were derived for a weaker model) are $D_h^{O(N)}$ and $D_h$ respectively. We hope the above fully addresses your question. If not please let us know and we will happily expand!
--- $\dagger$ Note that CNNs with overlapping windows were studied in [5], but this paper did not provide upper bounds on the separation rank. [1] Cohen, Nadav, and Amnon Shashua. "Inductive bias of deep convolutional networks through pooling geometry." ICLR (2017) [2] Levine, Yoav, et al. "Deep learning and quantum entanglement: Fundamental connections with implications to network design." ICLR (2018) [3] Khrulkov, Valentin, Alexander Novikov, and Ivan Oseledets. "Expressive power of recurrent neural networks." ICLR (2018) [4] Levine, Yoav, Or Sharir, and Amnon Shashua. "Benefits of depth for long-term memory of recurrent networks." (2018) [5] Sharir, Or, and Amnon Shashua. "On the expressive power of overlapping architectures of deep learning." ICLR (2018).
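The chain-graph bounds stated in the reply above ($|C_I| \cdot 2^{L-1} \leq WI_{L-1}(I) \leq |C_I| \cdot 3^{L-1}$) can be sanity-checked numerically by counting walks via powers of the adjacency matrix. The snippet below is our own illustration (not from the paper), assuming self-loops are included in the adjacency matrix, matching the degree counts in the reply, and that $C_I$ is the set of vertices with an edge crossing the partition:

```python
import numpy as np

def walk_index(adj, boundary, length):
    """Number of walks of the given length originating at boundary vertices."""
    counts = np.linalg.matrix_power(adj, length) @ np.ones(adj.shape[0])
    return counts[sorted(boundary)].sum()

N, L = 8, 4
A = np.eye(N, dtype=int)                  # self-loops on every vertex
for i in range(N - 1):
    A[i, i + 1] = A[i + 1, i] = 1         # chain graph edges

I = set(range(N // 2))                    # "low-high" partition (0-indexed)
# boundary C_I: vertices with an edge crossing the partition
boundary = {v for v in range(N) for u in range(N)
            if A[v, u] and u != v and ((v in I) != (u in I))}

wi = walk_index(A, boundary, L - 1)
assert len(boundary) * 2 ** (L - 1) <= wi <= len(boundary) * 3 ** (L - 1)
```

For this small chain both inequalities hold, with the upper bound attained because every walk from the boundary stays among interior vertices of degree 3.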
NeurIPS_2023_submissions_huggingface
2023
Experiment Planning with Function Approximation
Accept (poster)
Summary: The paper studies the problem of static experiment planning under the function approximation setting, which contrasts with the adaptive setting in that the reward is not observable during the static planning phase. The paper proposes two static planning strategies: (1) the EluderPlanner, based on the eluder dimension, and (2) a naive uniform sampler, and analyzes their regret, achieving cumulative regret guarantees similar to those of the OLS and SquareCB algorithms in the adaptive setting. The paper takes a step further by designing a special tree-structured function class that highlights the gap between the proposed static planning algorithms and the lower bound of adaptive learning in terms of regret. Finally, the paper addresses the problem of model selection and demonstrates the possibility of constructing an $\varepsilon$-optimal policy. Strengths: Clear motivation and good writing: - I enjoyed reading the motivational example in the introduction that explains why adaptive learning scenarios can be challenging. - All the problem setups and the relations with the related literature are clearly explained. - The structure of the paper is good and easy to follow. - Assumptions are well explained, and the definition of the eluder dimension is helpful. Novelty: - Combining the eluder dimension and static experiment planning appears to be a novel approach. - The lower bound construction in Section 5 is informative and demonstrates a fundamental gap between static planning and adaptive learning. Theoretical soundness: - I read through the main text, and all the derivations seem correct to me. Weaknesses: Related work on reward-free RL: I found reward-free RL to be closely related, especially [1], which also describes reward-free RL under function approximation. It would be good to have a short paragraph discussing reward-free RL in general. [1] Shuang Qiu, Jieping Ye, Zhaoran Wang, Zhuoran Yang.
"On Reward-Free RL with Kernel and Neural Function Approximations: Single-Agent MDP and Markov Game." Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: I am not familiar with Eluder dimension, but I find the construction of the function class in Section 5 extremely interesting. Will there be a similar gap for some of our familiar non-linear function classes, such as neural networks or kernel spaces, between static planning and adaptive learning? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We want to thank the reviewer for their kind comments. A more in-depth discussion of the relationship between experiment planning and reward-free RL is a great idea to improve our manuscript. We will use the extra page granted at camera-ready time to achieve this. We will add the citation the reviewer mentioned to the final version of our work. “Will there be a similar gap for some of our familiar non-linear function classes”. This is an extremely interesting question! We believe the conditional tree-structured class can be embedded as a subclass of a linear class, or of a more complex kernel or NN class, which would address this question. It is certainly a very interesting avenue for future research.
Summary: The paper addresses the problem of experiment planning with function approximation in contextual bandit problems. It targets the scenario where the datasets include a large number of contexts but no rewards. The authors propose two experiment planning strategies compatible with function approximation: the EluderPlanner and a uniform sampling procedure. The theoretical results guarantee that the policy converges to optimality within a horizon governed by the eluder dimension. Strengths: The algorithm targets a practical setting: the reward signal is scarce while contexts are abundant, and the proposed experiment planning algorithms handle this setting. Two important questions are answered: (1) are existing adaptive cumulative regret algorithms already optimal for simple regret minimization? (2) is static experiment planning sufficient to match the simple regret guarantees of adaptive learning for realizable contextual bandits? Weaknesses: 1. The paper does not provide a detailed comparison with existing experiment planning algorithms to show the improvement of the proposed algorithm, especially under function approximation such as linear functions. More related work and comparisons are needed. 2. The main results and proofs lack intuition highlighting the key points; a clear and intuitive proof sketch would help readers follow the idea. The results also lack a lower bound guaranteeing the optimality of the algorithm. A tight bound is needed; if that is not possible, the gap between the lower and upper bounds should be stated. 3. The paper could benefit from a more detailed discussion of the limitations of the proposed approach. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Is there a specific experiment to show the performance of the algorithm?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: No. The paper could benefit from a more detailed discussion of the limitations of the proposed approach. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We want to thank the reviewer for their comments. We would like to start by pushing back on the reviewer's comment regarding the comparison with existing experiment planning algorithms (see Weaknesses, issue 1). The setting of experiment planning was first introduced by [37]. The only existing results we are aware of are those for the linear algorithm of [37]. As we have explained in our manuscript, our Eluder Planner and Sampler algorithms recover and subsume the bounds of [37]. This is because the eluder dimension of a linear class agrees with the underlying dimension of the space. We will certainly add more explanation regarding this to the final version of the paper. “The main results and proofs lack intuition” Thanks so much for pointing out useful suggestions on how to make our paper easier to read and understand. We will make use of the extra page to make our explanations clearer to the reader. “The results lack a lower bound …” We are somewhat baffled by this comment. Deriving lower bounds for the general function approximation regime is an extremely complex question. For example, such bounds are not fully understood even for the setting of adaptive learning algorithms (bandits and RL), a vastly more developed area of research than the much more recently introduced setting of experiment planning. We hope this has persuaded the reviewer that matching lower and upper bounds for the function approximation regime in experiment planning is an extremely unrealistic measuring rod and beyond the reasonable scope of a NeurIPS submission. --- Rebuttal Comment 1.1: Title: 5 to 6 Comment: Thank you very much for your detailed response! My concern has been well-addressed and thus I would like to increase my score.
Summary: The authors study the problem of planning for efficient data collection. Given initial data with many contexts but no rewards, the question is how to devise a policy for data collection such that, when executed in the real world, it learns the reward well enough to finally output a policy with maximum reward. They propose two methods: i) an eluder planning and sampling procedure, and ii) a uniform sampler for the case when the number of actions is small. Strengths: The authors propose algorithms for experiment planning and theoretically analyze them. They also show suboptimality of existing adaptive learning algorithms, which is interesting. Lines 138-139: the intuition for the eluder dimension helps to understand its use case. Weaknesses: The paper was hard for me to grasp fully. Partially I lack background, but mostly I think it was the writing. Some ideas to improve it: 1) Either go with a short, vague intro or a precise long intro: currently, it is a long but vague intro, since the problem setting is not defined early and becomes difficult to understand. I have seen theoretical papers that define the problem statement in the intro, making it a precise long intro. 2) Since you continuously refer to the sampling and planning phases, maybe use a block diagram to represent them. In general, I believe the writing style can be improved. Secondly, I was wondering whether it would make sense to also add at least a preliminary experiment showcasing your theory? It is hard to get intuition on how tight the bounds are. Technical Quality: 1 poor Clarity: 1 poor Questions for Authors: Intuitively, when is it possible to have $d_{eluder}(F, B/T)$ be a sublinear function of $T$? I am not able to interpret Theorem 4.1 properly; on what factors does $\tilde{c}$ depend? Consider writing it as: let $T$ be the smallest integer satisfying $T\geq....$; then for all $T' \geq T$, $\pi_{T'}$ is optimal. Lines 94, 166: typos. Confidence: 1: Your assessment is an educated guess.
The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 1 poor Presentation: 1 poor Contribution: 3 good Limitations: no negative societal impact Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We deeply appreciate the reviewer’s comments. We will make sure the final version of our manuscript contains more detailed descriptions of the experiment planning setting early on in the introduction. We would like to remind the reviewer that we are not the first to come up with this problem setting. The authors of [37] were the first to define the experiment planning problem. Our assumption is that drawing from the related literature while defining the problems we are studying is a valid choice. We will take the reviewer’s concern into account when writing the final version of this work and will make sure we carefully balance in our introduction the references to related work and the material we include in our own manuscript. We think the reviewer’s suggestion of including a block diagram showing the planning and sampling phases is a great idea. We will add a simple pictorial representation to the final version. Although we wholeheartedly agree with the reviewer that an experimental evaluation of these algorithms would be of great interest, we consider the main contributions of this work to be theoretical. As such, and due to the plethora of results we have, we think adding additional content would obfuscate the main contributions of the current manuscript. We believe that a thorough empirical exploration of the problem of experiment planning with function approximation would be an important contribution in itself that we would love to see our work followed up by. One important roadblock that such work will have to overcome is that, in general, optimistic algorithms may be intractable under function approximation for general function classes. This is an exciting area of future research. Answers to miscellaneous questions: 1. “When is it possible to have $d_{\mathrm{Eluder}}(F, B/T)$ be a sublinear function of $T$?” In order to guarantee learnability, the eluder dimension needs to scale sublinearly in $T$.
This is the case for a plethora of function classes, such as linear classes and generalized linear models (where $d_{\mathrm{Eluder}} (F, B/T) \approx d \log(B/T)$). 2. “I am not able to interpret theorem 4.1 properly, on what factors does $\tilde{c}$ depend”. We are somewhat confused by the reviewer’s comment here. The theorem statement specifies that $\widetilde{c}$ is a universal constant (i.e. independent of the algorithms or function classes). The result simply states that if one wants to produce an $\epsilon$-optimal policy, $T \geq$ [expression in the paper] samples are sufficient. The expression to the right of $T$ depends only on $\mathcal{F}, \mathcal{A}, \delta$ and $\epsilon$. We would greatly appreciate it if the reviewer could further expand on what aspect of this formulation is confusing. We would love to hear this to improve our work. --- Rebuttal Comment 1.1: Title: about point 2 Comment: I was wondering, for intuition, when can you say $\tilde{c}$ will be large or small? Can it take any value, e.g., $\infty$? If so, why would it be a useful bound? Or maybe information about where it arises would help me in understanding. --- Reply to Comment 1.1.1: Title: Constant $\tilde c$ Comment: Thank you so much for your question. The constant $\tilde c$ will be something like $100$ or so. This comes from a repeated application of Hoeffding bounds. Think of these constants as a substitute for $O$ or $\Omega$ notation. They are completely independent of $\mathcal{F}$, $\delta$ and any other problem-dependent parameters. Instead of tracking the precise value of this constant (which would have made the computations tedious and burdensome) we instead substituted them by these symbols. We are happy to explain more if this is not clear.
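For readers less familiar with the eluder dimension invoked in answer 1 above, the following is a brief, informal recap of the standard definition; the precise formulation is in Russo & Van Roy (2013), and the notation here is ours rather than the submission's:

```latex
% A point x is \epsilon-dependent on x_1, \dots, x_n with respect to a
% class \mathcal{F} if every pair f, g \in \mathcal{F} that is close on
% the history,
%    \sqrt{\sum_{i=1}^{n} (f(x_i) - g(x_i))^2} \le \epsilon,
% is also close at x:  |f(x) - g(x)| \le \epsilon.
% The eluder dimension is the length of the longest sequence of "surprises":
\[
  d_{\mathrm{Eluder}}(\mathcal{F}, \epsilon)
  \;=\; \max\bigl\{ d \,:\, \exists\, x_1, \dots, x_d \text{ such that each }
  x_j \text{ is } \epsilon'\text{-independent of } x_1, \dots, x_{j-1}
  \text{ for some } \epsilon' \ge \epsilon \bigr\}.
\]
% For bounded linear classes f_\theta(x) = \langle \theta, x \rangle this
% scales as O(d \log(1/\epsilon)), matching the d \log(B/T) rate quoted above.
```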
Summary: The study focuses on the experiment planning problem for contextual bandits with function approximation. The paper gives two algorithms for the problem: (1) the EluderPlanning algorithm, whose sample complexity depends on the eluder dimension of the function class and matches the sample complexity of the OLS algorithm via an online-to-batch conversion. (2) It also suggests that simple uniform sampling is good when the action space is small, and the sample complexity matches the SquareCB algorithm via online-to-batch conversion. Then, the paper suggests that there is an exponential gap between adaptive sampling and passive sampling (experiment planning) by constructing a class of tree-structured functions. Moreover, for this class of functions, the paper shows that SquareCB and OLS (two adaptive sampling algorithms) have exponentially large sample complexity, while there exists an algorithm with reliability (called adaptive tree sampling in the appendix) that can achieve polynomial sample complexity on this tree-structured function class. This suggests the sub-optimality of SquareCB and OLS. Finally, the paper addresses the problem of model selection when the learner is presented with a family of reward function classes and the true reward function lies inside one of them. Strengths: The tree-structured function class constructed in Section 5 is interesting and elegant. With this function class, the paper points out an exponential separation between adaptive and passive sampling algorithms. Also, it points out that in the realizable setting, the online-to-batch conversion of SquareCB and OLS is suboptimal for simple regret minimization. Weaknesses: (1) The proof of Lemma A.4 has a *major flaw*: in Line 439, it is not valid to choose $\eta_t$ as a random variable that depends on the martingale difference sequence. This is because the moment generating function (MGF) is required to derive Lemma A.3 and thus also Lemma A.4.
It is valid only when the MGF coefficient $\eta$ is a fixed value, which is also reflected in the claim of Lemma A.3 ($\eta$ is fixed outside the probability bound). More generally, the MGF coefficient $\eta$ can be a random variable that is independent of the whole martingale difference sequence, so that the distribution of the sum of the martingale difference sequence is not affected by conditioning on $\eta$. (2) Because of (1), and because Lemma A.4 serves as a fundamental step for most theoretical claims in Appendix A, Section 3 and Section 6, all claims related to Lemma A.4 may be incorrect, including but not limited to Lemma A.5, Lemma A.7, Lemma A.8, Lemma 3.6, Theorem 3.4 and Proposition 6.1. (3) There are a few typos that affect the understanding of the technical details. (4) Even if all claims can be corrected, the main contribution is not cleanly presented; too many issues are discussed in the paper. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: (1) See weaknesses. If all problems can be correctly resolved, I would consider increasing the score. (2) There is a claim "Our results indicate the eluder dimension is not the sharpest statistical complexity measure to characterize learning in this setting." in line 326, but I don't find any support for this claim. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are extremely appreciative of the reviewer’s careful read of our manuscript. We are happy the reviewer identified the following strengths in our submission: this is the first work to explore the setting of experiment planning with function approximation, and it proposes a) novel algorithms for Eluder Planning and Sampling along with a thorough analysis of the Uniform Planner, and b) the first lower bound drawing a gap between adaptive and static planning. We believe these contributions, and in particular our lower bounds, are of interest not only within the context of experiment planning but also more generally in the wider literature on bandits and reinforcement learning. The tree function class that forms the basis of our lower bound results shows how conditional structures embedded in adaptive learning problems are one of the components of what makes adaptive learning hard. The fact that existing adaptive learning algorithms based on, for example, the eluder dimension cannot give optimal rates in this example (as we show in this submission) implies the need for better, more general algorithms for adaptive learning and a more complete treatment of the statistical complexity of this setting. We expect this insight to have wider consequences than what is contained in this submission. We will now address the reviewer’s main concerns. We apologize for the source of confusion and will try our best to explain it. We would like to start by noting that the results of Lemma A.4 are true. An alternative proof of this result consists of modifying Theorem 2.2 from [“Bias no more: high probability data-dependent regret bounds for adversarial bandits and MDPs”]. This result can be turned into an any-time bound by using a union bound over all $T \in \mathbb{N}$: instead of applying the result with the probability level set to $\delta$, we use $\delta_T = \frac{\delta}{6T^2}$. A union bound over all $T \in \mathbb{N}$ finalizes the result.
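To spell out the union-bound step: the per-horizon failure probabilities $\delta_T = \delta/(6T^2)$ are summable, so the resulting bound holds simultaneously over all horizons with total failure probability below $\delta$:

```latex
\[
  \sum_{T=1}^{\infty} \delta_T
  \;=\; \frac{\delta}{6} \sum_{T=1}^{\infty} \frac{1}{T^2}
  \;=\; \frac{\delta}{6} \cdot \frac{\pi^2}{6}
  \;=\; \frac{\pi^2}{36}\, \delta
  \;<\; \delta ,
\]
```

using the standard identity $\sum_{T \ge 1} T^{-2} = \pi^2/6$.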
The misunderstanding regarding the correctness of this lemma is the result of us authors deciding to add a proof based on a support lemma (Lemma A.3) borrowed from Lemma 9 in the paper entitled “Taming the Monster: A fast and simple algorithm for contextual bandits”. In the notation of the Taming the Monster paper, it was our understanding that this result was true uniformly over all lambda values (our eta) for any fixed delta. This is how the authors of the “Taming the Monster” paper seem to use it in the proof of their Lemma 11. Due to the reviewer’s careful reading of our manuscript, we dug deeper into the origins of Lemma A.3 (Lemma 9 in “Taming the Monster”) and discovered that in its original form (Beygelzimer et al., 2011), the result holds for a specific delta and gamma. Although this seems to imply that the Taming the Monster paper has a slight error, their results are still true. Had Lemma 9 in “Taming the Monster” held uniformly, as we initially thought, setting the eta value to a random variable would have been well justified. As we have mentioned before, there are many ways to fix this issue. One way, as noted above, is to instead derive Lemma A.4 using a uniform version of Theorem 2.2 from “Bias no more”. An alternative proof avenue is to invoke the results of Howard [2018], “Time-uniform, nonparametric, nonasymptotic confidence sequences”, although using these methods requires more technical jargon in the form of sub-psi processes that we wanted to avoid using in our manuscript. In the case of the “Taming the Monster” results, we believe it is also possible to derive a uniform-in-lambda (our eta) version of Lemma 9 from that paper (our Lemma A.3) by taking a union bound over an exponential grid of the lambda space. It may be of interest to the community to make the authors of that work aware of this issue.
We again want to thank the reviewer for catching this, and would really appreciate it if, should this explanation be found sufficient, the reviewer could consider the technical merits of the remainder of the paper and raise their score if they think it appropriate. The reviewer mentions the confusing nature of the claim “our results indicate eluder is not the sharpest statistical complexity measure to characterize learning in this setting”. In this case we were simply referring to the gap that exists between the Eluder Planner and Sampler algorithms and the uniform sampling strategy. When the size of the action set is very small, the uniform sampling strategy will yield a sharper upper bound than the eluder bound. A really exciting area of future research is to understand the tradeoffs that exist between these two settings and to figure out whether there is a single algorithmic solution and a more nuanced notion of statistical complexity that can bridge the gap between them. --- Rebuttal Comment 1.1: Title: About the major concern Comment: If you are going to use Theorem 2.2 in "Bias no more" to replace Lemma A.4 in your paper, you might need to justify that all of your results remain the same, at least in the dominant term. That might require a huge amount of derivations from the beginning, which might then be another paper. --- Reply to Comment 1.1.1: Title: The two lemmas are equivalent Comment: Dear reviewer, we are confused by this question, as Lemma A.4 and the anytime version we described above of Theorem 2.2 in "Bias no more" are exactly the same up to constants and log factors. $V$ in Bias no more is exactly $\sum_{\ell=1}^T E_\ell[X_\ell^2]$ of our submission. And $B^*$ in Bias no more is exactly $R$ in ours. We are happy to explain more why it is exactly the same up to constants and log factors if this explanation is not clear.
The any-time version of Lemma 4 in "Online Model Selection for Reinforcement Learning with Function Approximation" can also serve as a proof of Lemma A.4. This version would not suffer any logarithmic blowup. We are happy to provide an explanation if it is not clear.
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper studies static experiment planning for the policy learning problem in contextual bandits, with a focus on the general realizable case. The paper first presents an algorithm using reward-free emulation, extending similar ideas from [37] and leveraging the eluder dimension. The paper then shares a few theoretical results, including the competitive guarantee for the uniform sampling policy, the gap between static and adaptive policies in a special case, and results on model selection. Strengths: * The paper is technically sound. * The writing is very clear. * The paper studies an important problem and the first proposed algorithm is useful. Weaknesses: Numerical results on the finite-sample performance would be important to have. For example, although the asymptotic rate for uniform sampling is competitive and this fact might be surprising, it is expected that its finite-sample performance is worse. It is also not fully clear how significant these theoretical findings are. For example, it is expected that there is some performance gap between adaptive and static policies when the task is hard (though I appreciate that it is formally proven). Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. It would be helpful to add outputs to your algorithm tables. 2. In the main text, it would help to discuss the connection to [37] in terms of the reward-free emulation idea. 3. "Extracting Policy from Data" - the intuition for this part is hard to understand. Why uniform sampling + optimistic, instead of just using the greedy (or pessimistic) policy? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Extensions to safe exploration and/or policy evaluation tasks would be meaningful and worth some discussion.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: “Numerical results on the finite sample performance would be important to have” Although we wholeheartedly agree with the reviewer that an experimental evaluation of these algorithms would be of great interest, we consider the main contributions of this work to be theoretical. As such, and due to the plethora of results we have, we think adding additional content would obfuscate the main contributions of the current manuscript. We believe that a thorough empirical exploration of the problem of experiment planning with function approximation would be an important contribution in itself that we would love to see our work followed up by. One important roadblock that such work will have to overcome is that, in general, optimistic algorithms may be intractable under function approximation for general function classes. This is an exciting area of future research. The reviewer raises a couple of good clarification points. If our work gets accepted at the conference, we will make use of the extra space to better explain a few things. We add a couple of explanations here to hopefully allay the reviewer’s concerns; richer explanations will appear in the final version of our manuscript. 1. “It is also not fully clear how significant these theoretical findings are.” This is the first work to explore the setting of experiment planning with function approximation. We believe our manuscript has a couple of important results, for example a) novel algorithms for Eluder Planning and Sampling and a thorough analysis of the Uniform Planner, and b) the first lower bound drawing a gap between adaptive and static planning. We believe these contributions, and in particular our lower bounds, are of interest not only within the context of experiment planning but also more generally in the wider literature on bandits and reinforcement learning. 2.
“It would help to discuss the connection to [37]” Our work subsumes that of [37], since in the linear case our Eluder Planner and Sampler procedures recover the same rates as [37]. We will make sure this is fully explained in the final version of our work. 3. “Extracting Policy $\hat{\pi}_T$ from Data - the intuition for this part is hard to understand”. We prove the Eluder Sampler Planner procedure works by showing the simulated $\widetilde \pi_t^{\mathrm{opt}}$ sequence (produced via optimistic evaluations) satisfies a regret bound. This implies that sampling a uniform policy can be used to turn this ‘online’ regret bound into a PAC guarantee. The same argument would not work under a greedy or pessimistic policy. This does not preclude the possibility of showing that greedy or pessimistic policies would be appropriate choices, but proving this would require a whole set of different techniques. We agree this is an interesting follow-up research question. 4. “Extensions to safe exploration and/or policy evaluation tasks would be meaningful” The reviewer has identified another set of interesting follow-up directions. The problem of experiment planning under constraints is an important and yet unexplored area of research. We would be really happy to see more research efforts being spent towards better understanding all facets of experiment planning. --- Rebuttal Comment 1.1: Title: Follow up Comment: Dear Reviewer, We wanted to reiterate our commitment to having the reviewer's questions addressed. Please let us know if the response above addressed the reviewer's concerns. If these have been adequately addressed we would very much appreciate any indication of this. Thanks so much! The Authors
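To illustrate the online-to-batch step in point 3 above, here is the standard argument in our own (sketch) notation, writing $V^{\pi}$ for the value of a policy $\pi$ and $\mathrm{Reg}(T)$ for the regret of the simulated optimistic sequence:

```latex
\[
  \sum_{t=1}^{T} \bigl( V^{\star} - V^{\widetilde{\pi}_t^{\mathrm{opt}}} \bigr)
  \;\le\; \mathrm{Reg}(T)
  \quad\Longrightarrow\quad
  \mathbb{E}_{t \sim \mathrm{Unif}\{1, \dots, T\}}
  \bigl[ V^{\star} - V^{\widetilde{\pi}_t^{\mathrm{opt}}} \bigr]
  \;\le\; \frac{\mathrm{Reg}(T)}{T} .
\]
```

Hence a policy obtained by drawing $t$ uniformly and playing $\widetilde{\pi}_t^{\mathrm{opt}}$ is $\epsilon$-optimal in expectation as soon as $\mathrm{Reg}(T)/T \le \epsilon$; a greedy or pessimistic choice does not inherit this guarantee directly from the regret bound.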
Greedy Poisson Rejection Sampling
Accept (poster)
Summary: The paper proposes a new relative entropy coding algorithm for compression without quantization. Compression without quantization is an exciting line of research that tries to eliminate training-test mismatches in learned compression by avoiding discrete representations altogether. Consequently, one can losslessly compress data in a continuous latent variable model, such as a VAE, without quantizing the latent representation. The downside of existing algorithms for relative entropy coding is that their runtimes are intolerably slow. The paper makes a big step towards more runtime efficiency by designing new efficient rejection sampling approaches for transmitting samples from a variational posterior, especially in cases where the likelihood ratio between prior and posterior is uni-modal.  Strengths: + Relative entropy coding is a somewhat under-appreciated but up-and-coming field of research at the intersection between information theory and machine learning. There are only a few papers in this domain. (For example, recent advances made compression in diffusion models possible [Theis et al., 2022].)  + The paper significantly improves runtime efficiency for these algorithms, paving the way toward its scalable deployment. Compared to runtime complexities exponential in the Renyi divergence, it achieves a runtime *linear* in the KL divergence between the target and proposal distributions. Notably, the involved algorithm is conceptually much simpler (and in practice faster) than relative entropy coding with A* coding, which achieves the same asymptotic scaling.  + Beyond its base version, the paper also proposes two computationally much more efficient extensions: parallel GPRS and branch&bound GPRS. Both approaches rely on non-trivial mathematical insights, e.g., additivity of a Poisson process.  
+ Moreover, in one-dimensional channel simulation problems where the density ratio is unimodal, the algorithm displays the theoretically-optimal runtime (unlike existing approaches such as A* sampling) + The paper shows a nice combination of solid empirical results and theory. For example, most claims on the runtime complexities and theoretical guarantees (finite runtime) are proven rigorously.  Weaknesses: See questions below. There are no immediate weaknesses evident, but some clarifying questions need to be addressed. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: * Could the authors discuss in depth the commonalities and differences to a recent paper with a similar title, “adaptive greedy rejection sampling” (https://arxiv.org/pdf/2304.10407.pdf) that the authors already mentioned?  * The presentation at the beginning of Section 3 is very dense. For example, for an arbitrary P and Q, how is w_p approximated—analytically or based on samples? How does that affect the numerical solvability of Eq. 2? * It appears that the stretching function sigma is learned anew for every datum. How does this aspect affect the runtime, and is it respected in the scaling analysis? A discussion would be adequate.  * Parallelization and branch & bound: it appears that the parallelization affects the cost to *encode* data only, but not to *decode* data. Provided this is correct, could the authors explain why the encoding step is a bigger concern than decoding? In practice (e.g., video streaming), one mostly cares about reducing *decoding* complexity.  * Could the authors also comment on how they propose to simulate from the Poisson process restricted to subregions? It is unclear why this saves in runtime performance since sampling from a spatially-restricted Poisson process again amounts to rejection sampling.  
Writing suggestion: * It took me a long time to understand why a temporal Poisson process construction was used, when the original communication problem is time-independent. Maybe just say early on that introducing a temporal Poisson process replaces rejection sampling with a deterministic criterion, a trick that was previously proposed for numerical integration (Maddison). Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: Limitations in terms of scaling and speed are sufficiently addressed. Societal impacts are a less important topic for general-purpose data transmission protocols. Flag For Ethics Review: ['No ethics review needed.'] Rating: 9: Very Strong Accept: Technically flawless paper with groundbreaking impact on at least one area of AI/ML and excellent impact on multiple areas of AI/ML, with flawless evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their glowing review of our work; we are delighted that the reviewer shares our excitement for relative entropy coding/channel simulation! We answer the reviewer's questions below and will gladly answer any further questions. > Could the authors discuss in depth the commonalities and differences to a recent paper with a similar title, "adaptive greedy rejection sampling"? Of course! The "obvious" commonality between GPRS and adaptive greedy rejection sampling (AGRS) is that they are both rejection samplers for channel simulation, i.e. they encode exact samples with optimal expected codelength, but their runtime is stochastic. This contrasts with the depth-limited version of A* sampling, an importance sampler whose runtime is fixed but only returns approximate samples. Moreover, GPRS and AGRS are greedy algorithms, though they differ in _how_ they are greedy. As we explain in lines 207-210, we call GPRS greedy because it performs a greedy search over the points of a Poisson process instead of the non-greedy search performed by A* sampling/coding. On the other hand, AGRS is greedy in the sense defined by Harsha et al. (2009): it greedily maximizes the acceptance probability of a proposed sample under a certain correctness constraint; see Section IV in Harsha et al. (2009) for the precise explanation. These two notions of greediness are likely related, and we are actively investigating this link. Furthermore, Flamich & Theis (2023) also propose a space partitioning method that empirically appears to achieve an exponential speedup similar to GPRS's branch-and-bound variant. However, the authors' space partition differs significantly from ours in several ways: The authors' method is based on dithered quantization (DQ; Agustsson & Theis, 2020) and hybrid coding (Theis & Yosri, 2022), while we based GPRS's branch-and-bound variant on a bisection search method, inspired by the AS* and AD* variants of A* coding. 
Moreover, our method comes with theoretical guarantees on the correctness and runtime of the algorithm, while Flamich & Theis do not even prove the correctness of their space partitioning variant, let alone provide guarantees on the runtime; all their results are empirical. That said, it is interesting whether their DQ-based space partitioning method could be adapted to be used by GPRS too. > ... for an arbitrary P and Q, how is w_p approximated—analytically or based on samples? How does that affect the numerical solvability of Eq. 2? This is a good question; unfortunately, we lack a good answer for the general case. Appendix G provides analytic solutions for $w_P$ and $w_Q$ for the most relevant practical cases, i.e., the uniform, Gaussian and Laplace distributions. Hence, numerical integration is fine in these cases. Evaluating $w_P$ and $w_Q$ even approximately and studying its effects on the numerical integration in more general cases is an interesting future direction for research. As a first step, we are currently investigating if there is a natural family of distributions (e.g. natural exponential families) for which $w_P$ and $w_Q$ can always be analytically computed. In the camera-ready version, we will add a clarifying discussion on this and note the challenges the reviewer highlighted as interesting future research directions. > It appears that the stretching function sigma is learned anew for every datum. How does this aspect affect the runtime, and is it respected in the scaling analysis? We are not quite sure what the reviewer means by "datum". If they mean target distribution, then this is correct; we need to perform the numerical integration during each encoding procedure; unfortunately, there doesn't seem to be a good way to precompute $\sigma$. 
In our runtime analysis, we assumed that we could evaluate $\sigma$ in $\mathcal{O}(1)$ time, and we used the number of samples simulated by the algorithm as the proxy for the algorithm's runtime, as it is the dominant factor. However, reducing the complexity of the numerical integration is an important practical concern, and we will note this as future work in the camera-ready version. > Parallelization and branch & bound: it appears that the parallelization affects the cost to encode data only, but not to decode data. Provided this is correct, could the authors explain why the encoding step is a bigger concern than decoding? In practice, we use the step count as the random seed (mixed with a global seed) for the proposed samples. Hence, the decoder only needs to set the step count they received as their PRNG seed and simulate a single sample from the proposal distribution. In the camera-ready version, we will add a section to the appendix discussing the basic implementation details for GPRS. > Could the authors also comment on how they propose to simulate from the Poisson process restricted to subregions? It is unclear why this saves in runtime performance since sampling from a spatially-restricted Poisson process again amounts to rejection sampling. Great question! The reviewer is correct in the general case; simulating from arbitrarily truncated distributions is computationally hard. However, we only truncate to intervals in the one-dimensional branch-and-bound variant of GPRS. We can use generalised inverse transform sampling in this case: Suppose we have a real-valued random variable $X$ with CDF $F$ and inverse CDF $F^{-1}$. Then, we can simulate $X$ truncated to $[a, b]$ by simulating $U \sim \mathrm{Uniform}(F(a), F(b))$ and computing $X = F^{-1}(U)$. ## References - Agustsson, E., & Theis, L. (2020). Universally quantized neural compression. In NeurIPS 2020 - Flamich, G., & Theis, L. (2023). Adaptive Greedy Rejection Sampling. 
arXiv preprint - Harsha, P., Jain, R., McAllester, D., & Radhakrishnan, J. (2009). The Communication Complexity of Correlation. IEEE Transactions on Information Theory - Theis, L., & Ahmed, N. Y. (2022). Algorithms for the communication of samples. In ICML --- Rebuttal Comment 1.1: Title: Thanks for your response Comment: I have read the authors' response. My rating remains high. I hope the authors use my feedback not only to respond to me as a reviewer but also adopt changes to improve the paper's clarity in terms of practical aspects of their algorithm, e.g., when it comes to the stretching function and calculating w_p. Overall, I congratulate the authors on their nice work.
Summary: The paper addresses the problem of representing a target distribution using the least possible number of bits. The authors refined the idea of encoding a sample from the target distribution as the first sample from the proposal distribution that passes a rejection sampling condition. The refined approach is shown to achieve runtime optimality. Strengths: The idea of using a Poisson process to encode a target distribution is powerful. The strategy is well known but further investigations and attempts to make it more feasible could have an impact in various domains. Weaknesses: The authors' technical contribution could have been outlined more precisely. It is hard to understand why the new strategy is expected to be better than the A* algorithm or naive rejection sampling. The authors should have explained why "a criterion that does depend on the time variable" is better. In the introduction, the authors say that the distribution is assumed to be unimodal and 1-dimensional. This is a strong assumption and it is unclear why the proposed method requires it. Similar assumptions are made in Flamich 22. The authors may have specified whether these are also required in Maddison 16. The authors should have found a way of testing the algorithm on real-world data. The text could be more curated and structured. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - What are the requirements the target and proposal distributions should fulfil? Why do the distributions need to be 1-dimensional? Is this only for theoretical purposes (i.e. to estimate the runtime bounds)? Or would it be also a practical constraint? - How expensive is it to solve the ODE for computing \sigma? Should this be included in the total runtime? - How is the density ratio estimated? Is the method feasible if the target distribution is unavailable? - What are the key differences between the proposed algorithm, A*, and the approach described in Maddison 16? 
Why is rejection sampling supposed to be better than importance sampling? It would be good to include an intuitive explanation of why and when GPRS is faster. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: The authors do not discuss the limitation of the proposed approach. It is unclear what assumptions are required for the proofs. The theoretical runtime of related methods is not reported. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their comments and feedback on our work. We would like to begin by clarifying a crucial point that, in our experience, is the most common source of confusion regarding channel simulation. > The paper addresses the problem of representing a target distribution using the least possible number of bits. This is not quite correct; in channel simulation, our goal is **not** to encode the target distribution so that the decoder can recover it. Please see lines 25-27 and 136-139 in our paper that outline this. As the reviewer notes in the second sentence of their summary, we only seek to encode a sample from the target distribution. We now address the reviewer's concerns and questions. > The authors' technical contribution could have been outlined more precisely. We outline our technical and empirical contributions on lines 56-67, visually depict all GPRS variants in Figure 1, sketch GPRS's construction in Section 3, and construct and analyse it rigorously in the Appendices. Did the reviewer find these too vague or mathematically imprecise, or did the writing simply need improvement? We are happy to incorporate any concrete feedback the reviewer has! > It is hard to understand why the new strategy is expected to be better than the A* algorithm or naive rejection sampling. This is a tricky question that would take more space to answer than we have available for this rebuttal; if the reviewer is interested, we are happy to follow up with more detail! Note that the expected runtimes (in terms of the number of samples simulated) of GPRS and naive rejection sampling (NRS) are both equal to $2^{D_\infty[Q || P]}$; the only thing that differs is the codelength. Here, GPRS performs better than NRS because by only accepting samples below the graph of $\phi$ and rejecting everything above, GPRS concentrates the acceptance probability of samples in the earliest step possible. 
We see this best in the discrete case, where we might draw the same sample $X_k = X_{k+1}$ from the proposal twice in a row. In this case, GPRS will always accept $X_k$, whereas NRS might reject $X_k$ but accept $X_{k+1}$. At a high level, GPRS's branch-and-bound variant is better than A* coding's because it immediately terminates once it finds the sample it will accept. On the other hand, when A* sampling finds the sample it will eventually accept, it does some "extra useless work" by checking more samples due to its non-greedy acceptance criterion. In the camera-ready version, we will add an explanation of why GPRS outperforms NRS and A* coding based on our answers above. > The authors should have found a way of testing the algorithm on real-world data. We developed a rejection sampler; what kind of test using real-world data does the reviewer have in mind? > What are the requirements the target and proposal distributions should fulfil? Why do the distributions need to be 1-dimensional? Is this only for theoretical purposes (i.e. to estimate the runtime bounds)? Or would it be also a practical constraint? As we mentioned in lines 82-84, we can define the standard version of GPRS over very general probability spaces. We only require that $dQ/dP$ be bounded, meaning that GPRS is theoretically as widely applicable as NRS. On the other hand, the branch-and-bound version requires 1-dimensional distributions over $\mathbb{R}$ because it can be considered a stochastic bisection search, which is naturally defined over $\mathbb{R}$; extending it to more general spaces is highly non-trivial. > How expensive is it to solve the ODE for computing \sigma? Should this be included in the total runtime? Our analysis in the paper assumes that we can evaluate $\sigma$ in $\mathcal{O}(1)$ time; we will clarify this in the camera-ready version. Of course, in practice, we need to ensure that we can cheaply do the numerical integration, but we leave this for future work. 
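Returning to the runtime comparison above: a minimal simulation (a sketch only; the Gaussian pair and all constants are illustrative choices, not the paper's code) shows NRS's expected proposal count matching $2^{D_\infty[Q \| P]} = \sup_x \frac{dQ}{dP}(x)$:

```python
import numpy as np

rng = np.random.default_rng(0)
s = 0.25          # target Q = N(0, s^2), proposal P = N(0, 1); illustrative choice
M = 1.0 / s       # sup_x dQ/dP, attained at x = 0, so 2^{D_inf[Q||P]} = M = 4

def density_ratio(x):
    # r(x) = q(x) / p(x) for the two zero-mean Gaussians above
    return (1.0 / s) * np.exp(-0.5 * x**2 * (1.0 / s**2 - 1.0))

def nrs_sample():
    """Naive rejection sampling: accept X ~ P with probability r(X) / M.
    Returns the accepted sample and the number of proposals simulated."""
    k = 0
    while True:
        k += 1
        x = rng.standard_normal()
        if rng.uniform() < density_ratio(x) / M:
            return x, k

trials = np.array([nrs_sample()[1] for _ in range(20_000)])
print(f"mean proposals per accepted sample: {trials.mean():.2f}")  # close to M = 4
```

Here $\sup dQ/dP = 4$, so NRS simulates about 4 proposals per accepted sample on average; as stated above, standard GPRS has the same expected sample count, and the difference between the two shows up in the codelength.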
> How is the density ratio estimated? Is the method feasible if the target distribution is unavailable? GPRS wouldn't work very well as a general-purpose sampling algorithm in its current form. In this paper, we are only interested in applying it to channel simulation, where it is reasonable to assume that the encoder knows the density ratio exactly. > What are the key differences between the proposed algorithm, A*, and the approach described in Maddison 16? The approach proposed in Maddison 16 _is_ A* sampling, and channel simulation is not discussed. For a comparison of GPRS and A* _coding_, please see lines 207-214 and 291-300 in our paper. > Why is rejection sampling supposed to be better than importance sampling? The answer to this question depends on what problem we are applying the sampler to and what the reviewer means by "better". Rejection sampling yields an exact sample from the desired target distribution, but its termination time is stochastic. In contrast, importance sampling has a fixed runtime but only produces an approximate sample. > It would be good to include an intuitive explanation of why and when GPRS is faster. See our response above for a brief discussion regarding the speed comparison of the algorithms. > The authors do not discuss the limitation of the proposed approach. It is unclear what assumptions are required for the proofs. As noted in the paper, we intentionally left a lot of discussion in the main text informal and only sketched the constructions of the different variants of GPRS in Section 3 to illustrate the important, high-level ideas. In each section, we refer the reader to the appropriate parts of the appendix for the fully rigorous development of each result. Could the reviewer elaborate on what they found lacking and needing improvement, please? _We will gladly answer any further questions from the reviewer. 
On the other hand, should we have answered the reviewer's concerns adequately, we kindly ask the reviewer to consider raising their score._ --- Rebuttal Comment 1.1: Title: Thank you for your answers Comment: After reading the authors' answers and the other reviewers' comments, I am ready to support acceptance. I have raised my score accordingly. I still have concerns about the practical application of the scheme but recognise this is a theoretical work. Real data evaluation is probably not needed.
Summary: This paper focuses on the channel simulation problem, which finds applications in stochastic lossy compression. In contrast to the importance sampling approach A* coding (Flamich et al. 2022) for channel simulation with Poisson processes, this work adopts a rejection sampling method. They demonstrate that the standard sampling approach may yield suboptimal code lengths. As an alternative, the authors propose a novel rejection sampling algorithm called Greedy Poisson Rejection Sampling (GPRS). This algorithm composes the density ratio $r = \frac{dQ}{dP}$ with an invertible function referred to as the "stretch" function $\sigma$ that utilizes the temporal structure of the Poisson process without changing the target distribution. Theoretical findings establish the correctness and optimality of the proposed GPRS algorithm in terms of code length. Furthermore, the expected runtime of GPRS matches the runtime of standard rejection sampling. In addition to the vanilla GPRS, the authors introduce parallel and branch-and-bound variants to further enhance the runtime of the algorithm. Specifically, the branch-and-bound GPRS demonstrates a provable runtime improvement over A* coding when the density ratio exhibits unimodal characteristics. To validate the theoretical results, the authors conduct several toy experiments. Strengths: - Mathematical derivations look rigorous. - This paper is well-written and easy to follow. - Theoretical results seem to be compelling. The authors successfully demonstrate, through rigorous analysis, that the GPRS algorithm outperforms both standard rejection sampling and A* coding in terms of either runtime or code length in some cases. Weaknesses: - Some baselines and closely related works are not compared theoretically or empirically (See the first two bullets in Questions). 
Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: - GPRS is thoroughly compared to A* coding both theoretically and experimentally. However, it is unclear why the baseline standard rejection sampling (Algorithm 2) is not empirically compared in Section 4 to validate the theoretical assertion that GPRS enhances the code length efficiency compared to standard RS. - Additionally, while the more recent works dithered quantization methods (DQ) and greedy rejection coding (GRC) are briefly mentioned in the related works section, there is a lack of theoretical or empirical comparisons with GPRS. Although these methods may have distinct formulations and proof techniques, conducting some comparisons would be valuable in justifying the significance of GPRS and its potential advantages over alternative approaches. - GPRS needs to numerically solve an ODE for $\sigma^{-1}$ before the time steps. What about the extra cost in runtime? Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their nice comments; we address the reviewer's questions below. > GPRS is thoroughly compared to A* coding both theoretically and experimentally. However, it is unclear why the baseline standard rejection sampling (Algorithm 2) is not empirically compared in Section 4 to validate the theoretical assertion that GPRS enhances the code length efficiency compared to standard RS. We omitted standard RS from the codelength comparison because its codelength is provably suboptimal (as we outline in Problem 1 in Section 2.2 and show in Appendix I), and hence including it in Figure 2 would be uninformative. On the other hand, Figure 2 does more than verify our theorems regarding the expected runtime of the different algorithms; it also demonstrates that our upper bound on GPRS's codelength appears quite tight, and empirically it tends to heavily concentrate around its mean and that the mean appears to be quite robust in that it closely lines up with the median. In the camera-ready version of the paper, we will add clarifying sentences based on our response above in the experiments section (Section 4). > Additionally, while the more recent works dithered quantization methods (DQ) and greedy rejection coding (GRC) are briefly mentioned in the related works section, there is a lack of theoretical or empirical comparisons with GPRS. Although these methods may have distinct formulations and proof techniques, conducting some comparisons would be valuable in justifying the significance of GPRS and its potential advantages over alternative approaches. **Regarding GRC:** We agree with the reviewer that in the current version of the paper, the comparison between GRC and GPRS in terms of high-level theoretical properties and empirical performance is too loose. 
We decided to omit GRC from Figure 2 because we felt it made the figure too cluttered; however, in the camera-ready version, we will add another section in the appendix with more empirical comparisons and specifically comparisons to different variants of GRC there. Furthermore, we will extend the discussion of GRC in the related works section and compare their theoretical properties more precisely by contrasting the exact bounds on their runtime, for example. **Regarding DQ:** Unfortunately, we are uncertain how to compare DQ with GPRS best as its formulation is significantly different. In fact, there isn't currently a complexity-theoretic framework that would cover all channel simulation protocols. We are actively working on this as part of our research agenda. In our paper, since we developed a rejection sampler, the number of samples simulated by the algorithm is a natural proxy for the computational time complexity that lends itself to theoretical analysis. We assume all other aspects (such as simulating a single sample from the proposal or evaluating sigma) take $\mathcal{O}(1)$ time. However, this proxy is inappropriate for DQ, as it is not a rejection or importance sampler and only needs to simulate one sample from the proposal. Furthermore, DQ only applies to cases where both the target and proposal distributions are uniform, and in this case, it enjoys a clear advantage over GPRS. In the camera-ready version, we will clarify that we are using the number of samples simulated by GPRS as a proxy for the algorithms' time complexity and assume that all other operations can be carried out in constant time. Based on our answer above, we will also discuss the difficulties of comparing DQ with GPRS. > GPRS needs to numerically solve an ODE for $\sigma^{-1}$ before the time steps. What about the extra cost in runtime? This is an important practical question, and currently, we lack a good answer to it. 
As discussed above, in our runtime analysis, we assumed that evaluating $\sigma$ or its inverse could be done in constant time and disregarded practical implementation details as it was not a hindrance to carrying out our experiments (we used Scipy's `odeint` function to solve the ODE which was sufficiently fast in practice). That said, developing a cheap approximation to $\sigma$ or its inverse is an important future direction, and we are actively working on it. Thus, in the camera-ready version, we will discuss the practical considerations for the numerical integration and point out that analyzing the complexity of evaluating $\sigma$ and developing cheap-to-evaluate approximations is an exciting future research direction. --- Rebuttal Comment 1.1: Comment: Thank you for the response. Upon reading both the main paper and the authors' rebuttal, I am inclined to support its acceptance. However, I would like to acknowledge that my evaluation is limited because this paper is out of my expertise.
Summary: This paper investigates the problem of one-shot channel simulation, which can be used for lossy compression without involving quantization. A new rejection sampling algorithm called greedy Poisson rejection sampling (GPRS) is proposed. Then, a parallelized and a branch-and-bound variant are proposed. These algorithms are analyzed regarding both correctness and runtime. Toy experiments on one-dimensional problems show that GPRS compares favourably against A* coding, the current state-of-the-art channel simulation protocol. Strengths: The proposed GPRS is properly positioned against previous and concurrent works, and contains sufficient novelty. The proposed method has much better runtime complexity compared with the previous SOTA, A* coding. The manuscript is well written and enough background information is provided for general readers. Weaknesses: Though this manuscript is a theoretical contribution, it would be better to discuss more about promising application scenarios and current gaps regarding both performance and efficiency, which can encourage more future works in this field and facilitate more applications of similar techniques. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: L331-334, please explain in more detail about the application scenario of efficient channel simulation protocol for multivariate Gaussians. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: see above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable comments and the concerns they raise, to which we respond below. > Though this manuscript is a theoretical contribution, it would be better to discuss more about promising application scenarios and current gaps regarding both performance and efficiency, which can encourage more future works in this field and facilitate more applications of similar techniques. We agree with the reviewer that we need to discuss promising applications for computationally efficient channel simulation/relative entropy coding to put our work into a better context. Indeed, channel simulation already has a few exciting non-trivial applications, such as model compression (Havasi et al., 2018), lossy data compression with perfect realism (Blau & Michaeli, 2018; Theis et al., 2022), and differential privacy (Shah et al., 2022). Our work in this area is particularly relevant as the practicality of all these works is limited precisely by the undesirable exponential scaling of the runtime of the channel simulation algorithms they use. Hence, in the camera-ready version of the paper, we will add a paragraph or two explaining and discussing this to help the reader put our work into its appropriate context. Furthermore, we will add more interesting possible future directions in the paper's final section to encourage future works, as the reviewer suggested. > L331-334, please explain in more detail about the application scenario of efficient channel simulation protocol for multivariate Gaussians. Of course! In fact, the example we give below is the primary motivation for our work; for a more complete discussion, please refer to (Flamich et al. 2020; 2022) and Theis et al. (2022). The relevance of multivariate Gaussians is their widespread use in machine learning, in particular for variational inference and diffusion models. 
Thus, let us assume we wish to develop a neural image compression algorithm using a variational autoencoder (VAE) and channel simulation. We choose a standard Gaussian prior and a mean-field Gaussian variational posterior for our VAE. Note that with a powerful-enough network architecture, this is not a very restrictive choice for the distributions, and the VAE will be able to adapt to the distribution of images very well. Now, for simplicity, we can train this VAE with the standard beta-ELBO on an image dataset to fit the network parameters, or we can also incorporate an adversarial loss term if we are interested in compression with realism constraints. Once we fit the model, we can compress a new image as follows, assuming that besides sharing common randomness and the VAE's prior $p$, the decoder also knows the VAE's generative network. The encoder receives a new image $x$ and uses the VAE's inference network to obtain the image's latent variational posterior $q(z | x)$. Then, the encoder uses a channel simulation protocol to encode a single multivariate Gaussian sample $z \sim q(z | x)$ using the VAE's prior $p$ as the proposal distribution. The decoder obtains a lossy reconstruction of the image $x$ by decoding the encoder's sample and pushing it through the VAE's generative network. The practicality of this scheme hinges upon the efficiency of the channel simulation protocol. Previous works have all used inefficient, exponentially scaling encoding algorithms and had to resort to small-scale experiments only or develop tricks to make the encoding procedure feasible. In contrast, we could break up encoding a single multivariate sample into a sequence of one-dimensional samples and apply branch-and-bound GPRS dimensionwise. This would be fast but incur at least 1 bit of overhead per dimension, so the codelength would be somewhat suboptimal. 
Thus, solving the multivariate Gaussian problem would yield a maximally efficient solution that does not have the dimensionwise codelength overhead. ## References - Blau, Y., & Michaeli, T. (2018). The perception-distortion tradeoff. In Proceedings of the IEEE conference on computer vision and pattern recognition - Flamich, G., Havasi, M., & Hernández-Lobato, J. M. (2020). Compressing images by encoding their latent representations with relative entropy coding. In NeurIPS 2020 - Flamich, G., Markou, S., & Hernández-Lobato, J. M. (2022). Fast relative entropy coding with A* coding. In ICML 2022 - Havasi, M., Peharz, R., & Hernández-Lobato, J. M. (2018). Minimal random code learning: Getting bits back from compressed model parameters. In ICLR 2018 - Shah, A., Chen, W. N., Balle, J., Kairouz, P., & Theis, L. (2022). Optimal compression of locally differentially private mechanisms. In AISTATS 2022 - Theis, L., Salimans, T., Hoffman, M. D., & Mentzer, F. (2022). Lossy compression with Gaussian diffusion. --- Rebuttal Comment 1.1: Comment: I have read all the discussion and decide to keep my score.
Rebuttal 1: Rebuttal: We thank all the reviewers for their valuable feedback on our paper, which will help us improve it significantly. We are delighted that all reviewers agree that our contributions are significant, that most reviewers (sb37, HQuL, 4NF6 and tLU5) found our exposition well-written and easy to follow, and that Reviewers prqL and tLU5 highlighted the potential impact of our work. However, the reviewers also identified several aspects of the paper that need improvement. Thus, we highlight the most common concerns and the most significant changes we propose for the camera-ready version of our paper. ## Delineate GPRS as a sampling algorithm from its use in channel simulation We presented GPRS in its full generality and did not comment much on its limitations as a general sampling algorithm, detached from its intended use for channel simulation. Hence, Reviewers sb37, prqL and tLU5 have rightly raised concerns regarding cases that often arise in practice when applying a sampling algorithm, e.g. that the Radon-Nikodym derivative is only known up to a constant or that the functions $w_P$ and $w_Q$ could be troublesome to evaluate. This question is especially interesting because A* coding, based on the A* sampling algorithm, is more generally applicable in practice just as a sampling algorithm. On the other hand, since we are ultimately motivated by applying our GPRS to neural compression via channel simulation, the set of distributions of practical interest is much smaller, consisting of the uniform, Gaussian and Laplace distributions. This is because, in neural compression, the data distribution (e.g. the distribution of natural images) is usually approximated by a latent representation following a simple distribution (e.g. Gaussian) that is then transformed by powerful neural networks to match the distribution statistics (e.g. in the case of VAEs and diffusion models). 
Hence, in the camera-ready version of the paper, we will add clarifying discussion regarding the limitations of applying GPRS as a general-purpose sampling algorithm and note that extending it to more general cases is future work. ## Applications of GPRS Reviewers sb37, HQuL and prqL expressed concern that the motivation or application of our method (and perhaps channel simulation in general) needs to be communicated more clearly. Developing a fast channel simulation algorithm is of great practical interest: several papers at NeurIPS, ICML, and ICLR, such as Havasi et al. (2018), Flamich et al. (2020), Agustsson & Theis (2020) and Theis et al. (2022), already rely on channel simulation/relative entropy coding. Their primary limitation is the exponential scaling of the runtime of the algorithms they use, which a faster algorithm would directly remedy. Thus, our paper presents a significant step towards making all the abovementioned methods more practical. Hence, in the camera-ready version, we will include a paragraph or two explaining and discussing this to help the reader put our work into its appropriate context. 
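As a back-of-the-envelope illustration of this exponential scaling (a sketch with purely illustrative numbers, not code from the paper; the closed-form $\sup dQ/dP$ below is the standard expression for a narrower Gaussian against a standard normal proposal, and we use the $2^{D_\infty[Q \| P]}$ expected-sample-count proxy from our rebuttals above), compare encoding a mean-field Gaussian sample jointly versus one dimension at a time:

```python
import numpy as np

# Hypothetical mean-field Gaussian target q = N(mu, s^2 I) with a standard
# normal proposal p = N(0, I); all numbers below are purely illustrative.
d = 8
mu = np.full(d, 0.7)
s = np.full(d, 0.5)   # s < 1 keeps the density ratio dq/dp bounded

# Per-dimension D_inf[q_i || p_i] in bits, i.e. log2 sup_x q_i(x)/p_i(x);
# for N(mu, s^2) vs N(0, 1) with s < 1, the sup is attained at x = mu / (1 - s^2).
d_inf_bits = (np.log(1.0 / s) + mu**2 / (2.0 * (1.0 - s**2))) / np.log(2.0)

# The sup of a product of density ratios is the product of the sups, so D_inf
# adds over dimensions; a joint encoder therefore simulates exponentially many
# proposals in d, while dimensionwise encoding keeps every sub-problem cheap
# (at the price of roughly 1 extra bit of codelength per dimension):
joint_samples = 2.0 ** d_inf_bits.sum()       # roughly 3.5k proposals for d = 8
dimwise_samples = (2.0 ** d_inf_bits).sum()   # roughly 22 proposals in total

print(f"joint: {joint_samples:.0f}, dimensionwise: {dimwise_samples:.0f}")
```

This is precisely the gap the dimensionwise scheme sketched in our rebuttal above exploits: the joint cost is exponential in the total divergence, while the dimensionwise cost is a sum of small per-dimension costs, paid for with a per-dimension codelength overhead.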
## Concerns regarding the implementation and numerics Related to the first point, Reviewers sb37, 4NF6, prqL and tLU5 expressed concerns regarding the implementation details and numerics of GPRS, such as: - how $w_P$ and $w_Q$ can be computed or approximated - the numerical stability and time complexity of evaluating $\sigma$ or $\sigma^{-1}$ - the intricacies of the encoding and decoding procedures While there are many subtle details, here we would like to emphasize that: - for the practically relevant channel simulation cases $w_P$ and $w_Q$ have analytic forms which are stated in Appendix G - for the runtime complexity analysis we assumed that $\sigma$ or its inverse can be evaluated in $\mathcal{O}(1)$ time - the time complexity of the numerical integration is essentially independent of the number of samples simulated, and we found it to be negligible in our experiments, as the number of samples GPRS simulates heavily dominates the time complexity anyways To address all the above and more, in the camera-ready version of the paper, we will add another section to the Appendix to detail all relevant considerations for the practical implementation of each GPRS variant. __We thank the reviewers again for their time reviewing our work and providing important feedback to improve our work. We will gladly address any further questions the reviewers might have during the discussion phase. Should we have answered a reviewer's concerns adequately, we kindly invite them to consider raising their score.__ ## References - Agustsson, E., & Theis, L. (2020). Universally quantized neural compression. In NeurIPS 2020 - Flamich, G., Havasi, M., & Hernández-Lobato, J. M. (2020). Compressing images by encoding their latent representations with relative entropy coding. In NeurIPS 2020 - Havasi, M., Peharz, R., & Hernández-Lobato, J. M. (2018). Minimal random code learning: Getting bits back from compressed model parameters. In ICLR 2018. - Theis, L. & Yosri, N. 
(2022) Algorithms for the communication of samples. In ICML 2022. - Theis, L., Salimans, T., Hoffman, M. D., & Mentzer, F. (2022). Lossy compression with Gaussian diffusion.
Summary: This paper proposes a new algorithm for lossy compression, using ideas from Poisson rejection sampling. Given a sample $y \sim P_y$, Alice wants to communicate the smallest number of bits possible such that Bob can simulate $x \sim P_{x | y}$, when Alice and Bob have access to the distribution $P_{x, y}$ (and shared randomness to allow simulation). The proposed algorithm achieves an almost optimal codelength by rejection sampling -- by introducing a temporal random variable $t$, Alice can communicate the hitting time $t$ for which $(t, X_t)$ lies under an appropriately defined function $\phi$. Variations of this algorithm are suggested: one variation parallelizes across $L$ servers to reduce the runtime by a factor of $L$, and another is a binary-search variant which gives an exponential improvement when the Radon–Nikodym derivative $dP_{x|y} / dP_{x}$ is bounded. Strengths: - The paper has several nice ideas from information theory, and seems to be the first that can provably achieve runtimes predicted by Li and El Gamal. - The algorithm is built on A* coding, but is more robust, in the sense that while the runtime of A* coding depends on the Rényi divergence between $P_{x|y}$ and $P_{x}$, the runtime of the proposed algorithm is proportional to the KL divergence between them, which may be much smaller and finite even when the Rényi divergence is infinite. Weaknesses: - The rejection sampler requires access to likelihoods of the conditional distribution and marginal, and hence it's not clear how much this can be generalized. - The inverse function in Eqn (2) is solved numerically. How would this be computed for distributions with harder CDFs? - All experiments are toy experiments on Gaussian, Binomial variables etc. - The experiments are for scalar random variables, and extending to higher dimensions would be extremely non-trivial. The authors state that extending this to multivariate Gaussians is an open problem. 
- The paper tries to establish connections with areas like generative modelling by arguing that source compression is useful in VAEs/latent diffusion models, but this is probably not correct. The works of Theis et al for example use generative models for compression, not the other way around. Minor: - The name GPRS for a communication protocol is probably a poor choice given that it's an existing standard. - $\mathbb{V}$ in Theorem 3.1 is undefined. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Questions are listed in the weaknesses section. My main concern is that this is probably a poor fit for NeurIPS. The theorems are nice, but the impact on a NeurIPS audience is probably extremely limited, given that the algorithms are only feasible for explicit likelihood models and scalar random variables. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: There needs to be more discussion about the limitations of this work, especially if the authors are going to mention connections between the proposed algorithm and compression of neural networks. The connections to generative modelling are extremely weak. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
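As a concrete point of reference for the scheme the review sketches, below is a minimal Python sketch of the Poisson functional representation of Li and El Gamal (the baseline whose predicted runtimes the paper provably achieves), specialized to a discrete toy channel. This is our own illustrative code, not the paper's GPRS algorithm; the function name and toy distributions are ours.

```python
import numpy as np

def pfr_sample(q, p, rng):
    """Poisson functional representation (Li & El Gamal):
    returns (index, x) with x distributed according to q, using only
    draws from p and the (bounded) density ratio q/p. The index is
    what the encoder would communicate to the decoder."""
    r = q / p                        # density ratio dQ/dP
    r_max = r.max()
    t = 0.0                          # arrival time of a unit-rate Poisson process
    best_score, best = np.inf, None
    i = 0
    while t / r_max < best_score:    # no future arrival can beat the current best
        t += rng.exponential()       # next inter-arrival time
        x = rng.choice(len(p), p=p)  # spatial mark X_i ~ P
        score = t / r[x]
        if score < best_score:
            best_score, best = score, (i, x)
        i += 1
    return best

rng = np.random.default_rng(0)
q = np.array([0.7, 0.2, 0.1])        # target distribution Q
p = np.array([1/3, 1/3, 1/3])        # proposal / marginal P
index, x = pfr_sample(q, p, rng)
```

The stopping rule is what makes this terminate: once the next possible arrival time divided by the maximum density ratio can no longer beat the best score seen so far, the current best point is the global minimizer, and its mark is exactly distributed according to $Q$.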
Rebuttal 1: Rebuttal: We thank the reviewer for their nice comments and valuable feedback and attempt to address the concerns they raise below. > The rejection sampler requires access to likelihoods of the conditional distribution and marginal, and hence it's not clear how much this can be generalized. We believe the reviewer means that in our definition of GPRS, we require exact knowledge of $dQ/dP$ (i.e. an unnormalized version will not suffice). Furthermore, we need to be able to compute the $w_P$ and $w_Q$ quantities just to numerically evaluate $\sigma^{-1}$, which makes GPRS difficult to apply to more complex problems. These concerns are completely fair, and we agree that GPRS, in its current form, would be a terrible general-purpose sampling algorithm! However, we are interested in applying GPRS to solve channel simulation, which we would use as a core component in a neural data compression pipeline, where the most frequently used channel distributions are uniform, Gaussian and Laplace distributions. We will improve our presentation in the camera-ready version in two ways based on the reviewer's comment: 1. We will more clearly delineate between using GPRS as a general-purpose rejection sampler (for which it is terrible) and using it for channel simulation (for which it yields theoretically optimal and SOTA results). 2. It is an exciting open question whether GPRS could be extended into a general-purpose sampling algorithm by lifting the requirements of exact knowledge of $dQ/dP$ and of being able to compute $w_P$ and $w_Q$ analytically. > The inverse function in Eqn (2) is solved numerically. How would this be computed for distributions with harder CDFs? We believe the reviewer is asking here about cases where, even if we know the derivative exactly and can compute $w_P$ and $w_Q$ analytically, evaluating them might still be computationally expensive. This is a very good question; sadly, we lack a good general answer. 
Thankfully, this is not an issue in the practically relevant cases (i.e. uniform, Gaussian, Laplace); these quantities are all cheap to evaluate (see Appendix G for the precise analytic formulae). Thus, based on the reviewer's comment, we will discuss the concerns mentioned above regarding the practicality of the numerical integration in the camera-ready version of the paper. > The experiments are for scalar random variables, and extending to higher dimensions would be extremely non-trivial. The authors state that extending this to multivariate Gaussians is an open problem. This is correct, though we'd like to specify that standard GPRS can also be applied to multivariate Gaussians. The open problem is concerned with finding a fast algorithm with $\mathcal{O}(D_{KL}[Q || P])$ runtime for multivariate Gaussians. > The paper tries to establish connections with areas like generative modelling by arguing that source compression is useful in VAEs/ latent diffusion models, but this is probably not correct. The works of Theis et al for example use generative models for compression, not the other way around. We think this might be a misunderstanding; our work is fully in line with the work of Theis et al. (2022); in particular, we could apply "standard" GPRS to obtain a similar diffusion-model-based lossy compression algorithm. Could the reviewer please elaborate on which of our arguments they are referring to and believe to be incorrect? > The name GPRS for a communication protocol is probably a poor choice given that it's an existing standard. We thank the reviewer for pointing this out! Fortunately, we believe that the two concepts are sufficiently far apart (one being a rejection sampler, the other being a cellular communication standard) that the chances of much confusion arising should be minimal. > $\mathbb{V}$ in Theorem 3.1 is undefined. 
We thank the reviewer for noting this; $\mathbb{V}$ simply denotes the variance of a random variable in analogy to the expectation operator $\mathbb{E}$. We will clarify this in the manuscript. > My main concern is that this is probably a poor fit for NeurIPS. The theorems are nice, but the impact to a NeurIPS audience is probably extremely limited, given that the algorithms are only feasible for explicit likelihood models and scalar random variables. We appreciate the concern of the reviewer. However, we argue that our work is, in fact, relevant to the learned data compression community at NeurIPS, as evidenced by many related papers at NeurIPS, ICML, and ICLR, such as Havasi et al. (2018), Flamich et al. (2020), Agustsson & Theis (2020) and Theis et al. (2022). All of these works use channel simulation/relative entropy coding. Their primary limitation is the exponential scaling of the runtime of their algorithms, which could be improved by using a faster algorithm. Thus, our paper presents a significant step towards making all of the above-mentioned methods more practical. However, we must communicate this point more clearly in the paper. Hence, in the camera-ready version, we will include a paragraph or two explaining and discussing this to help the reader put our work into its appropriate context. ## References - Agustsson, E., & Theis, L. (2020). Universally quantized neural compression. In NeurIPS 2020. - Flamich, G., Havasi, M., & Hernández-Lobato, J. M. (2020). Compressing images by encoding their latent representations with relative entropy coding. In NeurIPS 2020. - Havasi, M., Peharz, R., & Hernández-Lobato, J. M. (2018). Minimal random code learning: Getting bits back from compressed model parameters. In ICLR 2018. - Theis, L., & Yosri, N. (2022). Algorithms for the communication of samples. In ICML 2022. - Theis, L., Salimans, T., Hoffman, M. D., & Mentzer, F. (2022). Lossy compression with Gaussian diffusion.
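On the numerical evaluation of $\sigma^{-1}$ discussed in this rebuttal: when $\sigma$ is continuous and strictly increasing on a known bracket, a plain bisection suffices and its cost is independent of the number of samples drawn. The sketch below is ours; the concrete $\sigma$ used in the example is an illustrative stand-in, not the function from the paper.

```python
import math

def invert_monotone(sigma, y, lo, hi, tol=1e-12):
    """Solve sigma(t) = y for t on [lo, hi] by bisection,
    assuming sigma is continuous and strictly increasing
    with sigma(lo) <= y <= sigma(hi)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if sigma(mid) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative stand-in: sigma(t) = 1 - exp(-t), whose exact inverse
# at y is -log(1 - y); here y = 0.5 gives t = log 2.
t = invert_monotone(lambda u: 1.0 - math.exp(-u), 0.5, 0.0, 50.0)
```

Each query costs $\mathcal{O}(\log((\mathrm{hi}-\mathrm{lo})/\mathrm{tol}))$ evaluations of $\sigma$, which is consistent with treating the inversion as effectively constant-time in a runtime analysis.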
Characterization of Overfitting in Robust Multiclass Classification
Accept (poster)
Summary: The main worry regarding the excessive reuse of test datasets in machine learning is its potential to cause overfitting. The objective of this paper is to characterize the relationship between the amount of robust overfitting bias and three key factors: the number of classes (m), the number of robust accuracy queries (k), and the size of the test dataset (n). The main theoretical contributions of this paper consist of providing upper and lower bounds on the attainable robust overfitting bias in multiclass settings. Strengths: The paper showcases exceptional writing skills, surpassing the average standard of presentation seen in many well-written and well-structured papers. The introduction successfully establishes a clear motivation and provides a satisfactory outline of the contributions. Although the section on related works is concise, it not only acknowledges a broad range of prior research efforts but also effectively highlights the key distinctions and limitations. Despite my personal unfamiliarity with several of the cited works and some of the presented topics, I never felt lost. This is mainly attributed to the "Summary of our results" section, which introduces the various notations used in the paper, provides a very clear formulation of the addressed problem, and offers a summary of the main results and the techniques employed in their proofs. The proofs of the various theoretical results are generally clear and sufficiently detailed. However, there are occasional passages that I did not understand easily and, in my opinion, require more elaboration. For instance, the usage of the Chernoff bound for Bernoulli random variables with bias could benefit from further explanation. Weaknesses: My primary concern relates to the level of novelty and originality in the theoretical results presented in the paper, particularly concerning the techniques employed in the proofs. 
It is worth acknowledging that the paper serves as an extension of previous works by (Feldman et al., "The advantages of multiple classes for reducing overfitting from test set reuse," ICML'19) and (Acharya & Suresh, "Optimal multiclass overfitting by sequence reconstruction from Hamming queries," ALT'20) within an adversarial setting. However, it is important to note that the proofs in this paper largely follow a similar approach to those in the aforementioned papers, and the extension to the adversarial setting does not significantly contribute to the technical aspect. I have observed several errors in the paper (which I believe are oversights) that can be easily noticed but may lead to a misunderstanding of the contributions. One specific example is found in line 1 of Algorithm 2, where the division of n into (k-1) blocks should be denoted as (B_1...B_(k-1)) instead of (B_1, ..., b_k). It is crucial to maintain a consistent notation for the blocks using a capital B, and it should be noted that the last block is indexed as (k-1). Furthermore, on the same line, the upper and lower bounds of |B_i| need to be reversed due to the condition n > k-1. The correct bounds should be $\frac{n}{2(k-1)} \leq |B_i| \leq \frac{n}{k-1}$. These errors should be rectified to ensure the accuracy and clarity of the paper. Another aspect that I find regrettable is the lack of discussion or interpretation regarding the various bounds presented in the paper. This absence leaves a gap in the understanding of the implications and significance of these bounds. It would have been valuable to explore the practical implications of the bounds, their relationship to existing theories or frameworks, and any potential limitations or extensions that arise from them. Providing such a discussion would have enhanced the overall comprehensiveness and depth of the paper. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: See the above Flaws part. 
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 2 fair Limitations: No potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
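On the proof step the review above asks to be elaborated: the bound in question is presumably the standard multiplicative Chernoff bound for sums of i.i.d. biased Bernoulli variables, which for $S=\sum_{i=1}^{n} X_i$ with $X_i \sim \mathrm{Bernoulli}(p)$ and $\mu = np$ is commonly stated as:

```latex
\Pr\bigl[S \ge (1+\delta)\mu\bigr] \le \exp\!\Bigl(-\tfrac{\delta^{2}\mu}{2+\delta}\Bigr) \quad (\delta > 0),
\qquad
\Pr\bigl[S \le (1-\delta)\mu\bigr] \le \exp\!\Bigl(-\tfrac{\delta^{2}\mu}{2}\Bigr) \quad (0 < \delta < 1).
```

We state this only as generic background; the paper's exact variant may differ in constants.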
Rebuttal 1: Rebuttal: Thanks for reviewing our work and affirming our presentation; this means a lot to us. Below are our responses to your comments. We hope these address your concerns. Do not hesitate to reach out if you have further questions or suggestions. **About the novelty and originality** We apologize for our inadequate referencing of known proofs/techniques, which may have made it hard for you to assess the originality. On the one hand, the proof of the upper bounds technically differs from the standard-case proofs derived by [1] in the following three respects: 1. To take the adversarial perturbation into account, we use Bayes' formula to represent the robust accuracy of a single sample by a Bernoulli r.v. parameterized by $C(f)$. This step is quite trivial in the standard case. 2. As discussed in Section 2.3, we introduce the notion of a hypothesis class to upper bound $C(f)$, which makes our bounds tighter. 3. Some inequality scaling techniques, e.g., Eq. (1) in Line 125 and the inequalities at the end of the proof in Lines 132-135. On the other hand, compared to [2], our technical novelty can be summarized as follows: 1. To expand the domain from $\mathcal{X} _S$ to the entire $\mathcal{X},$ we borrow the idea of the nearest-neighbors algorithm. This construction is proved to be optimal (up to logarithmic factors) for fixed $n$ and $\mathcal{D} _\mathcal{X}$. 2. In the proofs of Theorems 3 & 4, the introduction of corrupted classes is very elegant, without which one may not be able to derive the distribution of $(N^S _{i,1},\dots,N^S _{i,m}),$ which is crucial in the standard case for proving the lower bounds. This is because the output $\hat{f}$ may **always** predict a label that is not robust to adversarial perturbation. In other words, some samples may not be counted. So the "always wrong" label, i.e. 
$\perp,$ not only facilitates our deduction, but also instructs us to study the distribution of $(N^S _{i,1},\dots,N^S _{i,m},N^S _{i,\perp}),$ which makes the counting well defined. 3. In addition, there is a step deriving lower bounds on $\textbf{Pr}\\{\kappa _\mathcal{U}(\hat{f}(x))\neq\perp\\}$ in both proofs of Theorems 3 & 4 (Lines 174-177 & Lines 197-201), which has no parallel in previous works. 4. In our Algorithm $\mathcal{A}^{big}(C),$ in step 2.(b), we set $f _i(x _j)=\perp$. This construction makes the value of $A _l$ (Eq. (3) in Line 189) in our proof differ from that in [2], but the equation $A _{y _1}-A _l=W _0+M _{y _1}-M _l$ in Line 192 still holds, so we can continue the proof. However, one can prove that this is not the case when the queries are designed such that $f _i(x _j)=1,$ as constructed in [2]. **About the errors** We greatly appreciate your meticulous examination of our work, and we apologize for any confusion caused by these errors. Your postulations are exactly right. We assure you that these factors do not compromise the validity of our results, and we will correct them in the revision. Thanks again for your time. **About the absence of discussion** Thanks for the suggestion! We would be more than happy to add more discussion about our results and their practical implications, and we would like to highlight some points here. Recall that the question we focus on is: can excessive reuse of test datasets lead to overfitting in the robust learning setting? The answer to this question is implied in our Theorem 1. For a fixed number of classes $m$ and a fixed number of test examples $n,$ the lower and upper bounds on $h_\mathcal{U}$ are at least $\tilde{\Theta}(\sqrt{k}),$ which states that one can expect better performance with more queries made to the test set. So the answer is positive. As for the practical setting, for example, in modern ML benchmarks for adversarial robustness (e.g. 
RobustBench), hundreds of results have been reported on the same test sets (e.g., ImageNet, CIFAR-100). Large-scale hyperparameter tuning and experimental trials across numerous studies likely add thousands of queries to the test data, which may substantially "improve" robust accuracy according to our results. Our results may suggest limiting the number of queries or developing a monitoring mechanism to avoid malicious reuse of test sets. *References* *[1] Feldman et al. "The advantages of multiple classes for reducing overfitting from test set reuse." ICML 2019.* *[2] Acharya et al. "Optimal multiclass overfitting by sequence reconstruction from Hamming queries." ALT 2020.*
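The qualitative message of this rebuttal, that reported test accuracy "improves" with the number of queries even when the queries carry no signal, is easy to reproduce in a toy simulation. The script below is our own illustration using the best of $k$ non-adaptive random queries, not the paper's adaptive algorithms; all names are ours.

```python
import random

def best_of_k(n, m, k, rng):
    """Best test accuracy among k random classifiers on a fixed
    test set of n examples with m uniformly distributed labels.
    Chance level is 1/m; the maximum over k queries exceeds it
    by a margin that grows with k (test-set reuse bias)."""
    labels = [rng.randrange(m) for _ in range(n)]
    best = 0.0
    for _ in range(k):
        acc = sum(rng.randrange(m) == y for y in labels) / n
        best = max(best, acc)
    return best

rng = random.Random(0)
# With n=100 and m=2, chance level is 0.5, yet the best of
# k=1000 uninformative queries scores well above it.
print(best_of_k(100, 2, 1000, rng))
```

This matches the direction of the paper's bounds: for fixed $n$ and $m$, more queries allow larger overfitting bias, which is why limiting or monitoring test-set queries is a sensible mitigation.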
Summary: This paper generalizes the framework of perfect reconstruction to the adversarial setting and studies how much a k-query algorithm can overfit the test set in that setting. Upper bounds and lower bounds are derived, which match in terms of the number of classes m and the number of queries k modulo logarithmic factors when the number of test examples n and the distribution are fixed. Strengths: This paper gives a careful analysis that generalizes both the upper bound and the lower bound on how much an algorithm can overfit the test set in the non-robust setting to the adversarial setting. The results are novel. Weaknesses: 1. The dependence of the lower bound on the number of test examples n does not have a closed form and thus the question asked in the abstract is not fully solved. 2. It is not clear what role the perturbation set U plays in the bounds. In the adversarial setting, other than the parameters m, k and n, the radius r of the perturbation ball is also an important parameter and it is usually more informative if we know how the bounds are affected by allowing stronger attacks, i.e. a larger r. Otherwise, some intuition on why the perturbation size is irrelevant here would be helpful. 3. This paper is in general not very well written. For example, the term robust overfitting is confusing here: in line 9, robust overfitting [1] refers to the phenomenon that the robust test error can increase during adversarial training, but later on, e.g. in lines 21 and 81, the term is used to describe overfitting to the test set with k-query algorithms; there are not many detailed discussions on the motivation of the studied problem or on what the derived bounds suggest, such as how each parameter affects the bound and whether robust multiclass classification requires benchmark datasets different from standard classification. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: See Weaknesses. Minor: 1. 
In line 66, should it be h_U(A) - 1/m instead? 2. In line 107, the explanation after 'i.e.' is confusing. 3. In Algorithm 1, the definition of the output is not written in a very nice way. Probably move $\forall x \in X$ outside. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your time reviewing our work and your helpful suggestions. **About the dependence of the lower bound on $n$** As we discuss in Lines 77-84, the term $\Phi _{\mathcal{D} _\mathcal{X}}(n)$ depends strongly on the distribution $\mathcal{D} _\mathcal{X}$ and is unavoidable. Its specific form is presented in Section 3.2. This term measures how easy it is to sample $n$ 'good' (for robust overfitting) test data features from $\mathcal{D} _\mathcal{X}$. For example, if the test data features are well-separated, then we have $\Phi _{\mathcal{D} _\mathcal{X}}(n)\equiv1$. But in most cases the well-separated property does not hold; it can be proven that some extreme distributions of the test data features (e.g. supp($\mathcal{D} _\mathcal{X}$)$\subset\mathcal{U}(x _0)$ for some $x _0\in\mathcal{X}$) may not allow one to derive bounds w.r.t. $k$ when $k\geq m$. So we keep this term in our results. **About the role $\mathcal{U}$ plays** We apologize for any confusion arising from our presentation. That is indeed an interesting issue. We first clarify that $\mathcal{U},$ or equivalently $r,$ is relevant to our results. Intuitively, a bigger $r$ shrinks both the upper and lower bounds on $h _\mathcal{U}(k,n,m),$ and this is captured by the definitions of $C _\mathcal{H}$ and $\Phi _{\mathcal{D} _\mathcal{X}}(n)$ (see the proof of Theorem 2 and the definition of $\Phi _{\mathcal{D} _\mathcal{X}}(n),$ respectively). Nevertheless, we deliberately mitigate the influence of $r,$ as our focus is placed on investigating the potential for overfitting when test data is excessively reused under **specific** adversarial perturbations. Besides, our analysis suggests that the effect of $r$ is closely intertwined with $\mathcal{D} _\mathcal{X},$ and cannot be parameterized. A more explicit dependence on $\mathcal{U}$ or $r$ could potentially serve as a future avenue of exploration. Thank you again for bringing this to our attention. 
**About the term "robust overfitting"** Sorry for any confusion caused by our oversight. Indeed, the overfitting phenomenon in [1] can also be formulated under the $k$-query framework, since the process of adversarial training can also be viewed as adaptive queries. Nevertheless, we greatly appreciate your highlighting the potential misunderstanding of this term, and we will add some discussion in the revision. **"In line 66, should it be $h_\mathcal{U}(A)-1/m$ instead?"** We sincerely appreciate your meticulous review of our paper, but there is no typo here. $h_\mathcal{U}(A;S)-1/m$ measures how much $\mathcal{A}$ robustly overfits a specific test dataset $S.$ **"In line 107, the explanation after 'i.e.' is confusing."** Sorry for the confusion. The explanation means the '$\perp$' output satisfies $\perp\neq1,\dots,m$ and $\perp\neq\perp,$ see [2] for more details. **"In Algorithm 1, the definition of the output is not written in a very nice way. Probably move $\forall x\in\mathcal{X}$ outside."** Thanks again for your careful review. We totally agree and will fix it in the revision. *References* *[1] Leslie Rice et al. "Overfitting in adversarially robust deep learning." ICML 2020.* *[2] Daniel Cullina et al. "PAC-learning in the presence of adversaries." NeurIPS 2018.* --- Rebuttal Comment 1.1: Comment: Thank you for the response. I believe that adding the discussions mentioned in this and other rebuttals in the final version can greatly improve the presentation of this paper. I thereby raise my score to 5. --- Reply to Comment 1.1.1: Title: Re: Official Comment by Reviewer PNTq Comment: Thank you for your support! We will add more discussions to refine the presentation and enhance the quality of our submission.
Summary: In this paper, the authors consider the following question: given the number of classes m, the number of robust accuracy queries k, and the number of test examples n, how much can adaptive algorithms robustly overfit the test dataset? They solve this problem by giving upper and lower bounds on the robust overfitting accuracy in multiclass classification problems. Strengths: 1. Robust classification and adversarial perturbation are valuable topics in the learning theory community. 2. Some adaptive algorithms are proposed and could be useful in practice. Weaknesses: Major 1. The paper structure is not very clear. The algorithms are defined after the main results, which is very confusing. 2. Lines 56-69: The definition of k-query is not reader-friendly. It would be better to have some examples and figures. 3. Lines 63-65: The assumption that the labels are uniformly distributed is very restrictive and not general. This gives the study limited practical and theoretical contribution. 4. The gap between the upper and lower bound is large, which limits the two bounds' theoretical and practical contributions. 5. In Lines 49-52, the perturbation is defined with radius r. However, the main results do not contain r and place no restriction on r. The result would differ between a tiny and a huge r. Minor 1. The literature section needs a clearer analysis of the relationship among adaptive algorithms, overfitting, robustness, and multiclass classification. In addition, the motivation is not fully explained. 2. The title "Characterization of Overfitting in Robust Multiclass Classification" is too broad; it would be better to be more specific about the adaptive algorithms and adversarial setting. 3. What is the mathematical definition of $O$, $\Omega$, and $\Theta$? They are not defined in the paper. 4. The main result, Theorem 1 (lines 74-76), is labeled as informal. In addition, it is not proven and not reader-friendly. 5. 
The paper lacks practical experiments to support the main results. Technical Quality: 2 fair Clarity: 1 poor Questions for Authors: 1. What is the mathematical definition of $O$, $\Omega$, and $\Theta$? They are not defined in the paper. 2. Lines 56-69: The definition of k-query is not reader-friendly. It would be better to have some examples and figures. Could you provide a detailed explanation? 3. In Lines 49-52, the perturbation is defined with radius r. However, the main results do not contain r and place no restriction on r. The result would differ between a tiny and a huge r. Could you explain the relation between accuracy and r? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 1 poor Contribution: 2 fair Limitations: This paper does not discuss limitations or the potential negative societal impact of the work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your time reviewing our work. **About the paper structure** We focus on the question of whether adaptive, excessive reuse of test data leads to overfitting in robust learning. In Section 2, we transform this problem into studying the value of $h _\mathcal{U}(k,n,m),$ and our informal main result (Theorem 1) is a combination of our Corollary 1 and Theorems 3 & 4, which establish the upper and lower bounds on $h _\mathcal{U}(k,n,m)$ and are all proven rigorously. The algorithms were designed in order to prove the lower bounds on $h_\mathcal{U}(k,n,m),$ which is why we do not present them in the main results section. **About the definition of $k$-query** Sorry for the confusion. We would like to use our algorithm $\mathcal{A}^{small}$ ($k>1$) as an example to explain the definition of $k$-query. As defined in Lines 56-57, $\mathcal{A}^{small}$ first makes $k$ queries, namely $f _1,\dots,f _k$ (see Algorithm 2 for the specific definitions), on the test set $S.$ Note that these queries are designed in advance and are hence independent of $S$. Then $\mathcal{A}^{small}$ receives the values $\text{Acc} _\mathcal{U}(f _1;S),\dots,\text{Acc} _\mathcal{U}(f _k;S)$ and utilizes these robust accuracies to construct its final output $\hat{f}.$ **About the assumption of uniformly distributed labels** Following [1], we make this assumption since the algorithms have no prior knowledge about the test labels (as we state in Lines 63 & 64). Many experiments have also been conducted under this assumption, such as [3][4]. We believe this definition reflects the fact that excessive reuse of test data leads to overfitting in robust learning. **About the gap between the bounds** Theorem 1 shows that our upper and lower bounds match up to a logarithmic factor for any fixed number of test examples $n$ and feature distribution $\mathcal{D}_\mathcal{X},$ which is the best currently known. 
**There is no $r$ in the main results** We would like to clarify that our results do depend on $r.$ However, to make them more intuitive, we scale our bounds without changing their order. Specifically, in our full upper bound, Theorem 2, $r$ affects the value of $C _\mathcal{H},$ and we scale it using the fact that $C _\mathcal{H}\leq1$ in Corollary 1, which is why there is no $r$ in Theorem 1's upper bounds. On the other hand, in our formal lower bounds, namely Theorems 3 & 4, $r$ explicitly affects the value of $\Phi _{\mathcal{D} _\mathcal{X}}(n)$ (see Lines 152-154 for the definition of $\Phi _{\mathcal{D} _\mathcal{X}}(n)$), and this term is preserved in Theorem 1's lower bounds. **The definition of $O,\Omega,\Theta$** We apologize for any confusion caused by our oversight in assuming familiarity with these definitions. The $O,\Omega,\Theta$ notations, also known as big-O notations, are asymptotic notations. They are defined as follows: Given $f:\mathbb{R}\to\mathbb{R} _+$ and $g:\mathbb{R}\to\mathbb{R} _+$ we write $f=O(g)$ if there exist $x _0,\alpha\in\mathbb{R} _+$ such that for all $x>x _0$ we have $f(x)\leq\alpha g(x).$ We write $f=\Omega(g)$ if there exist $x _0,\alpha\in\mathbb{R} _+$ such that for all $x>x _0$ we have $f(x)\geq\alpha g(x).$ The notation $f=\Theta(g)$ means that $f=O(g)$ and $g=O(f).$ Finally, the notation $f=\tilde{O}(g)$ means that there exists $k\in\mathbb{N}$ such that $f(x)=O(g(x)\log^k(g(x))).$ The notations $f=\tilde{\Omega}(g)$ and $f=\tilde{\Theta}(g)$ are defined analogously. **The paper lacks practical experiments to support the main results.** Since [2] has already indicated the presence of overfitting in adversarial training, our paper is devoted to a theoretical question related to this phenomenon, namely, whether adaptive, excessive reuse of test data leads to overfitting in robust learning. 
Our results answer this question in the affirmative and suggest that one can expect better performance with more queries made to the test data, which aligns with the experimental results. *References* *[1] Vitaly Feldman et al. "The advantages of multiple classes for reducing overfitting from test set reuse." ICML 2019.* *[2] Leslie Rice et al. "Overfitting in adversarially robust deep learning." ICML 2020.* *[3] Cynthia Dwork et al. "Generalization in Adaptive Data Analysis and Holdout Reuse." NIPS 2015.* *[4] Cynthia Dwork et al. "The reusable holdout: Preserving validity in adaptive data analysis." Science 2015.* --- Rebuttal Comment 1.1: Comment: Thanks very much for the replies and answers. 1. About the assumption of uniformly distributed labels: As the proposed algorithm is not a Bayesian method, the assumption that the prior distribution of test labels is uniform is very strong. This gives the study limited practical and theoretical contribution. In the classification community, a more general distribution of test labels is a common choice. 2. About the gap of bounds: For example, in Theorem 1, Lines 74-75, the left side contains the term $\Phi_{\mathcal{D}_\mathcal{X}}(n)$. This term is only stated to be $\leq1$; it could thus be much smaller than 1, e.g., $o(1)$. So, the gap could be much larger. 3. There is no r in the main result: As the relation between $r$ and $C_\mathcal{H}$ is not clearly stated in the theorem, it is hard to follow the $r$ issue. 4. The paper lacks practical experiments to support the main results: For this issue, I agree with Reviewer 5K3z's comment, "I understand that your work is theoretical. However, adding illustrations, at least on simulated data, would bring value to the paper. It would ease the reading of the paper for a practitioner and could give ideas for future directions of research." Therefore, I maintain my rating. --- Reply to Comment 1.1.1: Title: Re: Official Comment by Reviewer YY54 Comment: Thanks for your replies. 
In response to your initial comments, we carefully addressed each concern raised. Below, we provide a point-by-point response to your feedback: 1. About the assumption of uniformly distributed labels We'd like to kindly point out that we make this assumption **since** our algorithms are not Bayesian. A Bayesian method refers to an approach that starts from a prior distribution. It is precisely because we have no prior knowledge about the distribution of data labels that we assume they are uniformly distributed. Furthermore, we'd like to highlight again that this assumption follows [1]. 2. About the gap of bounds Indeed, there might be situations as you've described, yet we would like to reiterate that this is contingent on the distribution of the data (the explicit form of $\Phi _{\mathcal{D} _{\mathcal{X}}}(n)$ is presented in Lines 152-154). For example, when $k>m$ and the distribution of $\mathcal{D} _\mathcal{X}$ is very ill-conditioned, e.g., $\text{supp}(\mathcal{D} _\mathcal{X})\subset\mathcal{U}(x _0)$ for some $x _0\in\mathcal{X},$ no algorithm would perform better than majority vote. In this case only a trivial lower bound can be derived, and we do have $\Phi _{\mathcal{D} _{\mathcal{X}}}(n)=o(1).$ That is why we consider this term "unavoidable". Nevertheless, our lower bounds match our upper bounds for fixed $\mathcal{D} _{\mathcal{X}}$ and $n.$ 3. There is no $r$ in the main result Recall that our study focuses on $h _\mathcal{U}(k,n,m)$ under a given, hence fixed, adversarial perturbation, meaning our emphasis is not on $\mathcal{U}$ or $r$. The significance of $C _\mathcal{H}$ is that the upper bounds become tighter if the algorithms are based on a smaller hypothesis class. 4. The paper lacks practical experiments to support the main results. We focus on a theoretical problem related to robust overfitting, which has been indicated in numerous works. 
Furthermore, we would like to highlight that NeurIPS has embraced a significant number of purely theoretical contributions, e.g., [2][3][4]. In light of this, we kindly request that the reviewer consider the broader context within which our paper aligns, and we believe our contribution is in line with NeurIPS's diverse range of accepted works. *Reference* *[1] Vitaly Feldman et al. "The advantages of multiple classes for reducing overfitting from test set reuse." ICML 2019* *[2] Ron Amit et al. "Integral Probability Metrics PAC-Bayes Bounds" NeurIPS 2022* *[3] Dimitris Fotakis et al. "Linear Label Ranking with Bounded Noise" NeurIPS 2022* *[4] Han Shao et al. "A Theory of PAC Learnability under Transformation Invariances" NeurIPS 2022* --- Reply to Comment 1.1.2: Title: Looking Forward to Your Response Comment: We want to inquire whether our rebuttal adequately addressed your concerns, as we value your feedback. We are open to further discussion if needed. Best regards,
Summary: This paper considers the problem of learning from data while being robust to possible transformations of these data. A common practice in machine learning is to split data into a training and a test set. The latter is a holdout to evaluate the performance of the algorithm. However, recent studies have shown that reusing this holdout set many times leads to overfitting in non-robust settings. This paper questions the problem of adaptivity in an adversarial/robust setting. Strengths: The authors present a very theoretical work on robustly overfitting the test set of adaptive algorithms. They show an excellent understanding of the theory behind this problem. Remarkably, the upper bounds in Theorem 1 match those of the literature when $\mathcal{U} = \mathcal{I}$ (no transformation of the set, i.e. no adversarial setting). Furthermore, for a fixed size of $S$, the test set, whose features are i.i.d. according to a fixed distribution, the upper and lower bounds match up to a logarithmic factor. Weaknesses: The paper is hard to follow for someone unfamiliar with the topic like myself. The NeurIPS conference gathers people from a broad audience, and this must be taken into account. In particular, the absence of an experimental part in the paper limits its impact on the community. Some numerical experiments could help understand how this work can be applied and its limits for future directions. Adding a toy numerical example could also motivate why we should care about the question the authors ask in the abstract. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Suggestions: - recall what $O, \Omega$ and $\Theta$ are - define $\widetilde{O}, \widetilde{\Omega}, \widetilde{\Theta}$, even if this seems trivial to the authors, - add an experimental section with experiments at least on simulated data and, better, on real datasets to illustrate the different theorems. 
Question: - are these $\widetilde{O}, \widetilde{\Omega}, \widetilde{\Theta}$ classical tools in your community? Why not expose the full bound with the logarithmic factor? Does it help the interpretation? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: No limitations are declared in the paper. The paper is theoretical and seems far from having a possible negative impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for reviewing our paper. It is worth acknowledging that numerous studies have already indicated the presence of overfitting phenomena in adversarial training, e.g., [1]. To this end, our paper is devoted to a certain **theoretical** question related to this phenomenon, that is, whether adaptively excessive reuse of test data leads to overfitting in robust learning. Our results answer this question in the affirmative and suggest that one can expect better performance with more queries made to the test data. The $O,\Omega,\Theta$ notations, also known as big-O notations, are asymptotic notations. We occasionally use them to clarify the main results; they are common in the computer science & machine learning communities. They are defined as follows: given $f: \mathbb{R} \to \mathbb{R} _{+}$ and $g: \mathbb{R} \to \mathbb{R} _{+},$ we write $f=O(g)$ if there exist $x _0,\alpha\in\mathbb{R} _+$ such that for all $x>x _0$ we have $f(x)\leq\alpha g(x).$ We write $f=\Omega(g)$ if there exist $x _0,\alpha\in\mathbb{R} _+$ such that for all $x>x _0$ we have $f(x)\geq\alpha g(x).$ The notation $f=\Theta(g)$ means that $f=O(g)$ and $g=O(f).$ Finally, the notation $f=\tilde{O}(g)$ means that there exists $k\in\mathbb{N}$ such that $f(x)=O(g(x)\log^k(g(x))).$ The notations $f=\tilde{\Omega}(g)$ and $f=\tilde{\Theta}(g)$ are defined analogously. *Reference* *[1] Leslie Rice et al. "Overfitting in adversarially robust deep learning" ICML 2019* --- Rebuttal Comment 1.1: Comment: I thank you for having answered my questions. I understand that your work is theoretical. However, adding illustrations, at least on simulated data, would bring value to the paper. It would ease the reading of the paper for a practitioner and could give ideas for future directions of research. Therefore, I maintain my rating. --- Reply to Comment 1.1.1: Comment: Thank you for your reply. However, it seems there might have been a misunderstanding in my previous response. 
We focus on a theoretical problem related to robust overfitting, which has been indicated in numerous works. Furthermore, we would like to highlight that NeurIPS has embraced a significant number of purely theoretical contributions, e.g., [1][2][3]. In light of this, we kindly request that the reviewer consider the broader context within which our paper aligns, and we believe our contribution is in line with NeurIPS's diverse range of accepted works. Thank you again for your time. *Reference* *[1] Ron Amit et al. "Integral Probability Metrics PAC-Bayes Bounds" NeurIPS 2022* *[2] Dimitris Fotakis et al. "Linear Label Ranking with Bounded Noise" NeurIPS 2022* *[3] Han Shao et al. "A Theory of PAC Learnability under Transformation Invariances" NeurIPS 2022*
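As an aside, the kind of simulated illustration the reviewer requests can be sketched in a few lines. The following is a hypothetical toy setup (ours, not an experiment from the paper): a fixed test set with uniformly random labels is queried by $k$ random classifiers, each query returning that classifier's test accuracy, and an adaptive aggregate weighted by the reported accuracies drifts well above chance as $k$ grows.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 500, 2000                      # test-set size, number of queries
y = rng.choice([-1, 1], size=n)       # hidden, uniformly distributed labels

# k "submitted" classifiers: random +/-1 predictions on the fixed test set
F = rng.choice([-1, 1], size=(k, n))

# each query to the holdout returns one classifier's test accuracy
acc = (F == y).mean(axis=1)

# adaptive reuse: weight every classifier by its centred reported accuracy
w = acc - 0.5
f_hat = np.sign(w @ F)
f_hat[f_hat == 0] = 1

print(f"mean single-query accuracy: {acc.mean():.3f}")       # near chance
print(f"aggregated accuracy:        {(f_hat == y).mean():.3f}")  # well above chance
```

No individual classifier carries information about the labels, yet the aggregate "overfits" the holdout, which is the qualitative phenomenon the $\tilde{\Theta}(\sqrt{k})$ dependence in the paper quantifies.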
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper studies bounds on the maximal difference between the robust accuracy of a classifier on the test set. Here, robustness means that each instance is allowed to move within a given radius within an $L_p$ ball, and the maximal difference is obtained with respect to $1/m$, where $m$ is the number of classes. The classifier's accuracy is obtained through querying an algorithm, and lower/upper bounds are obtained in terms of the number of queries made, the number of elements in the test set and the number of classes. Strengths: * A concise paper that, for once, does not have tons of supplementary material. * A rather rigorous treatment of the problem, with the paper consisting almost entirely of providing the proofs of the results. I think the results presented are correct (but this is not in my main area of expertise, so I could have missed something, and they would need to be checked by a more expert reviewer focused on this area of research) Weaknesses: * Better motivation of the results through illustration/practical problem mention: the paper focuses on providing a rigorous bound proof for a particular setting (robust overfitting), and assumes that the reader is convinced of the importance of the problem. In this sense it is quite focused, and it would have been nice for the reader to have at least some indication of the practical setting where this result could be used, or even better a little illustrative (synthetic) example of the situation one would like to deal with. This could be done, e.g., by dropping the proof of Lemma 1 if it is available elsewhere (Lemma 4 in [6], according to the paper), as it takes almost one page. * Clarifications about the assumptions and their limitations: in order to obtain the provided results, the paper makes some key assumptions whose limitations and need should be discussed in more detail. 
Two main examples of this are the following: * Well-separatedness of the classes is mentioned P3, L3 (note: it is not mentioned this way in reference [14]); it seems equivalent to assuming a deterministic relationship between input and classes, with furthermore the requirement to be separated by "large margins", so basically assuming that one is working under a version-space assumption. There would be a need to clarify that, to discuss how realistic this is, and how it can impact the provided results * Uniformity of the labels: another assumption is that the output labels are perfectly uniformly distributed. This is unrealistic in most practical settings, and it is not clear how the results would change in case of imbalanced classes (going from a slight departure from this prior information to a large one). Technical Quality: 3 good Clarity: 3 good Questions for Authors: Questions --- (wrap up weakness points for some) * Would it be possible to provide a slight illustration of the discussed results, or to give pointers to practical uses of these results? * The idealistic assumptions made in the paper (well-separatedness, label uniformity) are rarely met in practice. It is not clear how limiting this is, would one apply the obtained bounds in a practical setting? * I did not find in the paper a discussion about the quality of the provided bounds: can they be expected to be tight? How useful are they likely to be in practice? Suggestions --- * Some typos remain in the paper (e.g., in Theorem 2: an perturbation, $b=k ln(n+1)1$, L171: "we will show it later than") * Some parts of the proofs and technical aspects could be clarified by adding some steps that I missed while reading it. As examples: * P2, L39: it may be obvious, but what is the "closure" here (are we speaking about a specific closure operator?) * P5, L131: the derivation of $m\epsilon \geq 2 \geq 2 \mathcal{C}_\mathcal{H}$ is not immediate to me; maybe add a step? 
* P6, L161-L162: I do not see how we can obtain that the numbers of correctly predicted labels depend only on the $N_m$, and also why they should depend only on $N_1,N_2$. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: See weaknesses and questions comments about the initial assumptions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for reviewing our work and for your considerate suggestions. Here are responses to your concerns. **About the results.** We apologize for our presentation. Motivated by [1] and [2], the question we focus on is: can excessive reuse of test datasets lead to overfitting in the robust learning setting? The answer to this question is implied by our Theorem 1. For a fixed number of classes $m$ and a fixed number of test examples $n,$ the lower and upper bounds of $h_\mathcal{U}$ are at least $\tilde{\Theta}(\sqrt{k}),$ which states that one can expect better performance with more queries made to the test set. So the answer is positive. As for the practical setting, for example, in modern ML benchmarks for adversarial robustness (e.g., RobustBench), hundreds of results have been reported on the same test sets (e.g., ImageNet, CIFAR-100). Large-scale hyperparameter tuning and experimental trials across numerous studies likely add thousands of queries to the test data, which may substantially "improve" robust accuracy according to our results. Our results may suggest limiting the number of queries or developing some monitoring mechanism to avoid malicious reuse of test sets, which, however, is far beyond the scope of this article. All in all, thank you for the suggestion; we will add some discussion about this part in the revision. **About the assumptions** Sorry again for the confusion caused by our presentation. 1. Actually, we do not make the well-separatedness assumption. We mention this concept to emphasize that the term $\Phi_{\mathcal{D}_\mathcal{X}}(n)$ is unavoidable if we make no assumption on the distribution of the data features. In fact, one can easily show that under the well-separatedness assumption, our setting degenerates into the standard perfect label reconstruction problem. 2. Following [2], we assume the labels to be uniformly distributed since the algorithms have no prior knowledge about the test labels (as we state in Lines 63 & 64). 
Many experiments have also been conducted under this assumption, such as [3][4]. **About the quality of our bounds** The quality of our bounds is discussed in Lines 83-84: "Note that for a fixed size of $S$ whose features are i.i.d. according to a fixed distribution $\mathcal{D}_\mathcal{X}$, the upper and lower bounds are matching up to a logarithmic factor." **About the typos and suggestions on proofs and technical aspects** We appreciate your careful review of the article and for pointing these typos out; we will fix them in our final revision. The following is our response to your suggestions one by one: 1. Thank you for your suggestion; we will define the "support" more explicitly in the revision. 2. The derivation $m\epsilon\geq2$ is obtained by setting $\epsilon=\frac{2b}{n}$ (see the second formula in Theorem 2), and we did omit this step in our proof. We will complete this proof in the revision. 3. That the numbers of correctly predicted labels only depend on $N_1$ and $N_2$ follows from the construction of our Algorithm 1, whose output $\hat{f}$ only predicts labels $1$ and $2.$ *Reference* *[1] Vitaly Feldman et al. "Open problem: How fast can a multiclass test set be overfit?" COLT 2019* *[2] Vitaly Feldman et al. "The advantages of multiple classes for reducing overfitting from test set reuse." ICML 2019* *[3] Cynthia Dwork et al. "Generalization in Adaptive Data Analysis and Holdout Reuse" NIPS 2015* *[4] Cynthia Dwork et al. "The reusable holdout: Preserving validity in adaptive data analysis" Science 2015* --- Rebuttal Comment 1.1: Title: Thanks for the answer, still unclear about the practical setting. Comment: Dear authors, Thank you very much for your answers and clarification. 
I have also read the other reviewers' comments, and a number of them share the same concerns about the practical impact of the presented results: while I am not at all against fully theoretical results, I do think that outside of pure mathematics (and NeurIPS is not about pure mathematics) one should at least have an idea of how the theory should serve practice, or why filling a theoretical hole may be helpful. In light of this, I am still not fully convinced by the arguments brought forward, which mainly consist (if I am correct) in saying that it is less robust to always test on the same test set (for it can lead to optimizing for this test set). Overall this makes sense, but in my opinion this is largely a problem created by academic ML and the way algorithms are benchmarked, and in this sense it is hard to consider theoretical results about this as being of practical importance in actual ML applications. Most ML practitioners would not rely on a single test set to assess the value of a method. Right now I will keep my score, which is positive as I appreciated the paper, which looks at what appears to be a hard problem, but I must say that for the moment I am unable to appreciate its significance. Best regards --- Reply to Comment 1.1.1: Title: About the practical setting Comment: Dear Reviewer JGWq, Thank you for your response! We kindly emphasize again that, as highlighted in the **About the results** section of our rebuttal, our results do have practical scenarios, e.g., RobustBench [1] (an authoritative benchmarking platform designed to evaluate the robustness of machine learning models), where the test sets are fixed and thousands of results have been reported on them. As we discussed, our theory could potentially offer insights for the design of monitoring mechanisms for such platforms, or at the very least, serve as a reminder. 
It is also noteworthy that, much like those in RobustBench, benchmarks in modern machine learning operate in an adaptive fashion, where new models can depend on previous models. This is the reason we chose our setting, and we think our contribution is in line with NeurIPS's diverse range of accepted works. Best, *Reference* *[1] https://robustbench.github.io/*
Summary: The authors present near-matching lower and upper bounds for the robust accuracy of an adaptive algorithm having a budget of k accesses to the oracle of accuracy on a test set of size n and m classes. Strengths: The proof is really clear and pedagogical. The related work is clear. Weaknesses: The problem is not well sold nor introduced. I had to look at the problem formulation to understand what the authors were intending to do. While related work is abundant, more space could be used to explain the goals of the paper for new readers. And the conclusion could be more than just a summary, e.g., sketching future work. Also, as the authors humbly acknowledge, they borrow tools from several existing proofs, so it is not clear whether this is a breakthrough or incremental work. Still, it is nicely put. Minor comments: "binominal" => binomial Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: No Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: No Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate the time you've dedicated to reviewing our work. We would like to highlight the originality of our proof techniques, which are outlined as follows. On one hand, the upper bounds' proof technically differs from the proofs in the standard case derived by [1] in the following 3 respects: 1. To take the adversarial perturbation into account, we use Bayes' formula to represent the robust accuracy of a single sample by a Bernoulli r.v. parameterized by $C(f)$. This procedure is quite trivial in the standard case. 2. As discussed in Section 2.3, we introduce the notion of hypothesis class to upper bound $C(f)$, which makes our bounds tighter. 3. Some inequality scaling techniques, e.g., Eq. (1) in Line 125 and the inequalities at the end of the proof in Lines 132-135. On the other hand, compared to [2], our technical novelty can be summarized as follows: 1. To expand the domain from $\mathcal{X} _S$ to the entire $\mathcal{X},$ we borrow the idea of the nearest neighbors algorithm. This construction is proved to be optimal (up to logarithmic factors) for fixed $n$ and $\mathcal{D} _\mathcal{X}.$ 2. In the proof of Theorems 3 & 4, the introduction of corrupted classes is a key step: without it, one may not be able to derive the distribution of $(N^S _{i,1},\dots,N^S _{i,m}),$ which is crucial in the standard case to prove the lower bounds. This is because the output $\hat{f}$ may **always** predict a label that is not robust to the adversarial perturbation; in other words, some samples may not be counted. So the "always wrong" label, i.e. $\perp,$ not only facilitates our deduction, but also instructs us to study the distribution of $(N^S _{i,1},\dots,N^S _{i,m},N^S _{i,\perp}),$ which makes the counting well defined. 3. 
In addition, there is a step deriving lower bounds on $\textbf{Pr}\\{\kappa _\mathcal{U}(\hat{f}(x))\neq\perp\\}$ in both proofs of Theorems 3 & 4 (Lines 174-177 & Lines 197-201) which has no parallel in previous works. 4. In step 2.(b) of our Algorithm $\mathcal{A}^{big}(C),$ we set $f _i(x _j)=\perp$. This construction makes the value of $A _l$ (Eq. (3) in Line 189) in our proof differ from that in [2], but the equation $A _{y _1}-A _l=W _0+M _{y _1}-M _l$ in Line 192 still holds, so we can continue the proof. However, one can prove that this is not the case when the queries are designed such that $f _i(x _j)=1,$ as constructed in [2]. We will add more discussion about this part in the revision. Thank you again for checking this out. *Reference* *[1] Vitaly Feldman et al. "The advantages of multiple classes for reducing overfitting from test set reuse." ICML 2019* *[2] Acharya et al. "Optimal multiclass overfitting by sequence reconstruction from Hamming queries." ALT 2020* --- Rebuttal Comment 1.1: Comment: Thanks. I also enjoyed the discussion between the authors and reviewer WwD9. I think that this work should be better sold, more contextualized, and given more perspective to reach a broader audience. (Hyperparameter tuning is supposed to be done on a validation set, not the test set. But indeed, when a test set is public, you never know what made practitioners choose one hyper-parameter over another. You could, for example, mention that the test set could leak into the hyper-parameters.) --- Reply to Comment 1.1.1: Title: Re: Official Comment by Reviewer UhYx Comment: Thanks for the support and your considerate suggestions! We will add more discussion to improve our work.
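The nearest-neighbour extension the authors mention in point 1 above can be sketched concretely. The following is a hypothetical toy illustration (not the paper's exact algorithm): a labelling reconstructed on the test features $\mathcal{X}_S$ is extended to all of $\mathcal{X}$ by assigning every point the label of its nearest test feature.

```python
import numpy as np

rng = np.random.default_rng(1)
Xs = rng.normal(size=(50, 2))          # hypothetical test features X_S
labels = rng.integers(0, 3, size=50)   # labels reconstructed on X_S

def f_hat(x):
    """Extend the reconstruction from X_S to all of X via 1-nearest-neighbour."""
    j = np.argmin(np.linalg.norm(Xs - x, axis=1))
    return labels[j]

# the extension agrees with the reconstruction on X_S itself
print(all(f_hat(Xs[i]) == labels[i] for i in range(50)))
```

By construction, the extended classifier reproduces the reconstructed labels exactly on the test set, so whatever advantage the adaptive attack gains on $\mathcal{X}_S$ is preserved by the extension.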
LEACE: Perfect linear concept erasure in closed form
Accept (poster)
Summary: The paper presents a concept erasure procedure that also takes into account the distortion of the representations during the debiasing process. To this end, the paper builds on several works on iterative nullspace projection approaches and incorporates a regularization component. First, LEACE improves upon previous projection approaches by showing that E[X|Z]=E[X] is sufficient to protect information against a linear adversary. Using this result, the paper presents a closed-form solution for the objective function. The paper shows experiments on debiasing and interpretability setups. The results showcase that LEACE can achieve similar levels of debiasing performance with the learned representations having much less distortion (measured in terms of their rank). Strengths: Strengths: 1. The paper proves the interesting result that achieving the condition E[X|Z]=E[X] prevents any linear adversary from extracting information about the protected attribute from a representation set. 2. The paper presents a closed-form optimal solution to linear debiasing with the L2 regularization constraint. 3. The paper shows experiments in several debiasing and interpretability setups. The results show that LEACE is able to achieve debiasing performance at par with previous approaches while retaining significant information (in terms of rank). Weaknesses: Weakness: 1. It is unclear from the paper why the MSE reconstruction loss E[X-\hat{X}] is a good measure of the information being retained. Ideally, I believe that the goal of concept erasure is to retain within-class information for each of the protected class categories. For such a requirement, there can be a trivial baseline where E[X|Z]=E[X] is achieved by normalizing the data within each class to have the same mean. Does this baseline work at all in terms of debiasing? If yes, what are the advantages of LEACE over this baseline? If no, is E[X|Z]=E[X] the right condition to be achieved? 2. 
The derived closed-form result looks very similar to the result in RLACE. It would be interesting to have an analytical discussion about the differences and their implications on debiasing performance. 3. Although having minimal edits is one of the key focuses of this paper, reconstruction loss is not reported in the results section. Simply reporting the rank is not convincing enough to evaluate the efficacy of the proposed approach in the paper. How does the reconstruction loss compare with other linear debiasing methods? If rank is the right metric to look at instead of the MSE loss, the paper needs more justification about why that is the case. 4. The fair representation learning framework proposed in this paper has been studied by various works previously. For example, LAFTR[1] and [2] have a similar reconstruction loss along with a classifier and adversary. The paper should have these baselines by simply removing the classifier component and having just the reconstruction and a linear adversary loss. This would be useful in understanding the efficacy of projection-based approaches over adversarial ones. 5. The experimental section is fairly weak at the moment. For example, in Table 1, why is LEACE not compared with RLACE and INLP in this setup? The authors do not report the performance of word embedding debiasing, which is one of the most common benchmarks for evaluating concept erasure approaches. 6. The organization/writing of the paper can be improved in several areas. Broadly, some of the theoretical results can be condensed and more context about the debiasing setup would be helpful for readers not too familiar with concept erasure literature. See some minor additional comments in the suggestions/limitations section. [1] https://arxiv.org/pdf/1802.06309.pdf [2] https://www.cs.toronto.edu/~toni/Papers/icml-final.pdf Technical Quality: 3 good Clarity: 2 fair Questions for Authors: See the weakness section. 
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Line 45: “taking values in Rd”: what do values refer to here? I get that it refers to X, but this should be clearly stated. Line 53: could not find the 4th variable defined in Section 2.1. Is it referring to the loss function? Line 55: it's unclear why the family of loss functions needs to be used here. The guarding is defined w.r.t. a single loss function. The framing of the different lemmas could be smoothed out to make them more straightforward. More context about the functioning of the baseline approaches would be helpful. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > 1. It is unclear from the paper why the MSE reconstruction loss E[X-\hat{X}] is a good measure of the information being retained. We agree with the reviewer that mean squared Euclidean distance is a limited measure. Luckily, we have since proven that LEACE is highly robust to the specific distance function used. Specifically, we prove that LEACE simultaneously minimizes the entire family of squared (pseudo-)norms of the form $\\| x \\|_{\mathbf M}^2 = x^T \mathbf{Mx}$ for some p.s.d. matrix $\mathbf M$. Intuitively, this means that the “weight” or “importance” assigned to each direction in $\mathbb R^d$ does not matter. See our general rebuttal for a proof of this result. We agree that retention of within-class information is necessary for practically useful concept erasure, but it is not sufficient for our purposes. This is because our primary aim is to minimize damage to the network’s input-output behavior, without retraining or fine-tuning its parameters. Arguably a good proxy for this is the Fisher information metric, which locally approximates the KL divergence between the network’s final output before and after perturbing its activations. Since this metric is p.s.d., LEACE minimizes it “out of the box,” without any change to its formula. Thus, LEACE can indeed be viewed as approximately minimizing damage to the network’s behavior, even though it is architecture-agnostic. > For such a requirement, there can be a trivial baseline where E[X|Z]=E[X] is achieved by normalizing the data within each class to have the same mean. Does this baseline work at all in terms of debiasing? If yes, what are the advantages of LEACE over this baseline? Theorem 3.1 does indeed imply that the normalization technique you suggest ensures linear guardedness. The primary benefit of LEACE over mean-normalization is that mean-normalization requires access to ground-truth concept labels at test time. 
This is because one needs to know the label in order to know which class centroid $\mathbb E[\mathrm X | \mathrm Z = i]$ should be subtracted from a given data point $\boldsymbol x$. We often lack ground-truth labels at test time, such as when de-biasing a classifier w.r.t. a protected attribute. Attempting to “guess” the label from observed attributes is likely to be noisy, and may itself be an instance of bias or unfairness. > 2. The derived closed-form result looks very similar to the result in RLACE. It would be interesting to have an analytical discussion about the differences and their implications on debiasing performance. We presume the reviewer is referring to Proposition 3.1 in Ravfogel et al. (2022) Linear Adversarial Concept Erasure, which proves that the orthogonal projection matrix onto $\mathrm{colsp}(\Sigma_{XZ})^\perp$ can be used to achieve guardedness w.r.t. linear regression models. Indeed, Theorem 4.1 implies that this formula achieves guardedness w.r.t. linear classifiers as well. But unlike LEACE, it is not surgical, since it does not take into account anisotropy in the covariance matrix of X (see Figure 1 in the rebuttal PDF), nor does it account for the possibility that X is not centered at the origin. > 3. Although having minimal edits is one of the key focuses of this paper, reconstruction loss is not reported in the results section. We thank the reviewer for noting this omission. We have therefore included a plot in our rebuttal PDF (Figure 2). We plan to replace Figure 1 in the current submission with this new plot in the camera-ready version. > The fair representation learning framework proposed in this paper has been studied by various works previously. For example, LAFTR[1] and [2] have a similar reconstruction loss along with a classifier and adversary. The paper should have these baselines by simply removing the classifier component and having just the reconstruction and a linear adversary loss. 
We don’t view our work as being directly comparable to the cited works “LAFTR” (Madras et al. 2018) and Zemel et al. (2013), which are both comprehensive proposals for learning fair representations. By contrast, LEACE and concept scrubbing are modular tools which can be applied in a post hoc manner to pre-trained models of all kinds. > The experimental section is fairly weak at the moment. For example, in Table 1, why is LEACE not compared with RLACE and INLP in this setup? We exclusively compare to SAL in the concept scrubbing experiment because it is the only applicable prior art which achieves perfect linear guardedness (by Theorem 4.1), and because it is efficiently scalable to very large datasets, such as the Pile and RedPajama pre-training corpora. This scalability is due to the fact that, like LEACE, SAL only depends on covariance and cross-covariance statistics which can be computed in a streaming fashion. RLACE and INLP, by contrast, require expensive gradient-based optimization and would be much more computationally intensive and time-consuming to run for this large-scale experiment. > could not find the 4th variable defined in Section 2.1. Is it referring to the loss function? $\mathfrak L$ is a family of loss functions $\mathcal L$, and its definition was missing from Sec. 2.1. We will correct this in the camera-ready version. > it's unclear why the family of loss functions needs to be used here. The guarding is defined w.r.t. to a single loss function. In our work, we define guardedness w.r.t. a family of loss functions, since our Theorem 3.1 applies to the family of all convex losses. > lack of experiments on uncontextualized word embeddings. While the suggestion is well taken, we believe this is beyond the scope of the current paper. 
We believe the current content and focus fully occupies a complete paper, and that further expansions to problems outside of the deep classification context would make the paper unwieldy and would require comparing against two different lines of literature simultaneously.
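As noted above, SAL and LEACE scale to very large corpora because they depend only on covariance and cross-covariance statistics that can be accumulated one mini-batch at a time. A minimal sketch of such a streaming accumulator (illustrative only, not the authors' implementation; class and variable names are ours):

```python
import numpy as np

class StreamingCrossCov:
    """Accumulate sums for E[X], E[Z] and E[X Z^T] one mini-batch at a time,
    so Sigma_XZ = E[X Z^T] - E[X] E[Z]^T never requires the full dataset
    in memory."""

    def __init__(self, d, k):
        self.n = 0
        self.sum_x = np.zeros(d)
        self.sum_z = np.zeros(k)
        self.sum_xz = np.zeros((d, k))

    def update(self, x_batch, z_batch):
        self.n += len(x_batch)
        self.sum_x += x_batch.sum(axis=0)
        self.sum_z += z_batch.sum(axis=0)
        self.sum_xz += x_batch.T @ z_batch

    def cross_cov(self):
        mx, mz = self.sum_x / self.n, self.sum_z / self.n
        return self.sum_xz / self.n - np.outer(mx, mz)

# Streaming over batches matches the all-at-once computation.
rng = np.random.default_rng(0)
x, z = rng.normal(size=(500, 4)), np.eye(2)[rng.integers(0, 2, 500)]
acc = StreamingCrossCov(4, 2)
for i in range(0, 500, 100):
    acc.update(x[i:i+100], z[i:i+100])
full = (x - x.mean(0)).T @ (z - z.mean(0)) / 500
assert np.allclose(acc.cross_cov(), full)
```

Because only these fixed-size sums are kept, memory is independent of corpus size, which is what makes the method feasible on pre-training corpora like the Pile.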
Summary: This paper introduces a novel method to guard pretrained representations of deep neural networks from linear recovery of sensitive attributes Z. The notion of linear guardedness comes from the previous literature, while this paper proposes a simple characterisation of when it takes place, which makes it possible to dramatically simplify the algorithm as well as improve performance. Unlike previous methods such as INLP, the LEACE projection matrix has rank d - k, where d is the dimension of the representation at hand, and k is the number of values Z can take (i.e. 2 in the binary case). This makes it possible to guard the sensitive attribute with minimal losses. The authors confirm this with two experiments: 1) guarding gender in bios from De-Arteaga et al. 2) amnesic probing from Elazar et al. pros: - simple and effective method that outperforms previous literature - well written, and the equivalence theorems are easy to follow - the experiment in Figure 3 is fascinating, in particular, the conclusion that the information is concentrated in layer 11 cons: - lack of error bars. For example, in the experiment corresponding to Figure 1, what happens if we feed a limited data into the LEACE algorithm and test guardedness on the rest? - in the experiments the data to which LEACE is applied is the same as it is tested on. It is not tested whether the concept can be erased using dataset that is collected independent of the downstream task. - the conclusion in section 5.3 is relevant to interpretation methods such as Kim et al "Interpretability beyond attribution: TCAV", who are also concerned with what kinds of information a neural network extracts at each layer. Contrary to your claim, they report that the amount of "concept information" does not reduce as data passes from layer to layer, see Figure 5. I wonder how it compares to your measurements in the case of the POS concept. Their method is very simple and should be easy to implement. 
- it is also not entirely clear how it affects the remaining information. For example, if I look at two identical bios which differ only in the gender of a person, how would their representations compare after you have removed the gender concept with LEACE? minor comments: - Figure 3: ramdom - l306: which a we Strengths: - Weaknesses: - Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: - Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: - Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > lack of error bars. In Figure 1, the vertical lines crossing through each data point are 95% confidence intervals. We apologize that we did not make this clearer in the original submission and we intend to replace the error lines with a translucent error ribbon in the camera-ready version. > For example, in the experiment corresponding to Figure 1, what happens if we feed a limited data into the LEACE algorithm and test guardedness on the rest? …in the experiments the data to which LEACE is applied is the same as it is tested on. It is not tested whether the concept can be erased using dataset that is collected independent of the downstream task. We agree that examining the generalization properties of LEACE is an important research direction. There are many different metrics that could be used to measure generalization performance, and many different types of distribution shift that could be tested. Given the richness of the topic, we decided that we could not do it justice given the time and space constraints of this paper. We are excited to see this issue addressed in future work. > the conclusion in section 5.3 is relevant to interpretation methods such as Kim et al "Interpretability beyond attribution: TCAV", who are also concerned with what kinds of information a neural network extracts at each layer. Thank you for bringing this work to our attention. We would like to emphasize that, unlike Kim et al. (2018), the experiments in Sections 5 and 6 are not concerned with measuring the kinds of information extracted by the neural network at each layer. We are rather interested in the causal contribution of a query concept— in this case, part of speech— to the network’s performance. This is a distinct, albeit related quantity. Since the publication of Kim et al. 
(2018), several papers have found that the ease of extracting a concept from neural network activations is a poor proxy for the causal contribution of that concept to network behavior; see e.g. “Designing and Interpreting Probes with Control Tasks” by Liang et al. (2019), or “Probing classifiers: Promises, shortcomings, and advances” by Belinkov (2022) for a thorough literature review. We will clarify this issue in the camera-ready with appropriate citations. > it is also not entirely clear how it affects the remaining information. For example, if I look at two identical bios which differ only in the gender of a person, how would their representations compare after you have removed the gender concept with LEACE? We encourage the reviewer to examine Figure 1 in the PDF attached to our general rebuttal, which visualizes the LEACE erasure method on a 2D toy dataset. --- Rebuttal Comment 1.1: Comment: Thank you for your response. I will keep the score unchanged for now.
Summary: Suppose we are given a distribution of data points $(x,z)$, where $x \in \mathbb{R}^d$ and we have one-hot class labels $z \in \mathbb{R}^k$. This paper shows how to construct an affine transformation $\phi(x) = Px + b$ of the data so that * $\phi(x)$ has zero covariance with $z$ * $\|\phi(x) - x\|$ is minimized, subject to the above condition The paper argues that this is the optimal affine transformation that "erases" the concept $z$ from the representation $x$. Namely, linear regression on $x$ with any convex loss function will be unable to reconstruct $z$ better than random guessing. The paper empirically shows that the proposed method outperforms previous methods (RLACE, INLP) for erasing concepts from a representation, in terms of computational efficiency/collateral damage of the concept erasure. Applications are demonstrated in: 1) fairness, and 2) probing language models to measure how much a certain concept matters to their performance. Strengths: The paper has excellent exposition: the proofs and definitions are clear and simple to follow, and the experiments are generally well explained. The applications to language models are compelling, and the paper demonstrates an improvement over previous approaches. The result is of broad interest to the community. Weaknesses: I am not an expert in the field of concept erasure, so I do not know if this method was previously known. However, my feeling is that the paper proposes a "trivial" solution: orthogonal projection of the random vector $x$ to the subspace of random vectors uncorrelated with $z$. So I find it surprising that this method was not previously known (maybe in a different field and under a different name). Of course, this is not actually a weakness of the paper (but rather of the reviewer). Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1. 
The authors state that "prior work has focused solely on preventing linear models from leveraging Z", whereas their work applies also to deep neural networks. How does this claim relate to the cited papers [22, 5, 3, 37], which also intervene on the representations of deep neural networks? 2. Can the authors clarify the paragraph at the top of page 7, entitled "Results"? Why is 77.3% profession-prediction accuracy reported, and then 78.1% accuracy reported? What is the difference between these settings? Typos: * In Definition 2.1, \mathfrak{L} is said to have been defined in Section 2.1, but it is not. * In Section 2.2, the equation between lines 56 and 57 is an argmax over X' but X' does not appear in the expression * Figure 3, "ramdom" -> "random" * Appendix E.1, "a forteriori" --> "a fortiori" Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > my feeling is that the paper proposes a "trivial" solution: orthogonal projection of the random vector x to the subspace of random vectors uncorrelated with z. So I find it surprising that this method was not previously known (maybe in a different field and under a different name). We believe the simplicity of LEACE is one of its major strengths. After a thorough literature review, we were unable to find any prior work proposing an equivalent method, which suggests our approach may only seem “trivial” in hindsight. The closest prior art is the Spectral Attribute RemovaL (SAL) method of Shao et al. (2022), who erase a concept $\mathrm{Z}$ from a representation $\mathrm{X}$ by orthogonally projecting $\mathrm{X}$ onto $\mathrm{colsp}(\Sigma_{\mathrm{XZ}})^\perp$. Like SAL, LEACE also neutralizes the subspace $\mathrm{colsp}(\Sigma_{\mathrm{XZ}})$ with a projection. But unlike SAL, the LEACE projection is oblique (i.e. non-orthogonal) in general. To see the difference, we encourage the reviewer to examine Figure 1 in the PDF attached to our general rebuttal. SAL would orthogonally project the blue and orange points onto the dashed line, which does indeed achieve linear guardedness. But this is not the least-squares optimal solution, since it doesn’t respect the covariance structure of the data. Prior to applying SAL, the data has more variance along the x-axis than along the y-axis, but SAL inverts this structure. By contrast, LEACE preserves the original data distribution as best it can. We note also that our theorems prove the sufficiency of these methods for linear guardedness w.r.t. all convex loss functions (Theorem 3.1). Previously this had only been established for specific loss functions, such as the L2 loss (Ravfogel et al. 2022, “Linear Adversarial Concept Erasure”). > The authors state that "prior work has focused solely on preventing linear models from leveraging Z", whereas their work applies also to deep neural networks. 
How does this claim relate to the cited papers [22, 5, 3, 37], which also intervene on the representations of deep neural networks? We apologize for the misleading wording— it would be better to say that prior work has focused primarily, but not exclusively, on linear models. We will correct this mistake in the camera-ready version. That said, 3 of the 4 cited papers (Chowdhury et al. 2022, Celikkanat et al. 2020, and Subramanian et al. 2021) apply concept erasure to frozen contextualized embeddings extracted from the final layer of a language model, and not to intermediate activations. We would classify these papers as concerning linear models, since they aim to prevent a linear classifier trained on frozen embeddings from using a concept. The other paper (Lasri et al. 2022) does apply INLP to selected intermediate layers in BERT, but does not attempt to erase a concept from multiple layers simultaneously, as our concept scrubbing method does. We hope this clarifies the novelty of our concept scrubbing method. > Can the authors clarify the paragraph at the top of page 7, entitled "Results"? Why is 77.3% profession-prediction accuracy reported, and then 78.1% accuracy reported? What is the difference between these settings? When a profession predictor is fit on the original unedited training set, it achieves 77.3% accuracy on a LEACE’d validation set— that is, a validation set on which we have applied LEACE to remove the concept of gender. When the predictor is fit on a LEACE’d training set, it achieves 78.1% accuracy on the LEACE’d validation set. The gap between these two figures corresponds to the mild distribution shift created by LEACE. We apologize that this was not clearer in our submission and will rephrase the paragraph for clarity in the camera-ready version. --- Rebuttal Comment 1.1: Title: Response Comment: Thank you for the clarifications. I am happy to keep my score of Accept, and would like to congratulate the authors on the excellent work.
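The SAL-vs-LEACE contrast discussed in this thread can be made concrete with a minimal sketch (toy data and names are ours, purely illustrative): SAL's orthogonal projection onto $\mathrm{colsp}(\Sigma_{\mathrm{XZ}})^\perp$ does achieve linear guardedness, even though, as the rebuttal notes, it ignores the covariance structure of $\mathrm X$ and so is not the least-squares-optimal (oblique) edit that LEACE computes.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4000
z = np.eye(2)[rng.integers(0, 2, n)]
# Anisotropic data: more variance along the first axis than the second,
# with the concept signal along the low-variance second axis.
x = rng.normal(size=(n, 2)) * np.array([3.0, 1.0])
x[:, 1] += 2.0 * z[:, 0]

xc, zc = x - x.mean(0), z - z.mean(0)
sigma_xz = xc.T @ zc / n

# SAL: orthogonally project onto colsp(Sigma_XZ)^perp.
# (One-hot columns of Z are linearly dependent, so truncate to the rank.)
U, s, _ = np.linalg.svd(sigma_xz, full_matrices=False)
r = int((s > 1e-10 * s.max()).sum())
P_sal = np.eye(2) - U[:, :r] @ U[:, :r].T
x_sal = xc @ P_sal.T + x.mean(0)

# This achieves linear guardedness: zero cross-covariance with Z ...
cross = (x_sal - x_sal.mean(0)).T @ zc / n
assert np.allclose(cross, 0.0, atol=1e-8)
# ... but the projection is orthogonal, disregarding Cov(X), so it is not
# the minimum-MSE edit among all guardedness-achieving affine maps.
```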
Summary: This paper studies concept removal from features. For linear classifiers, the authors prove several equivalent characterizations of linear guardedness (reducing the accuracy of any linear classifier to the trivial accuracy). In particular, the features that achieve linear guardedness have zero covariance with the labels. Equipped with the new characterizations of linear guardedness, the authors propose a new formulation of concept removal. Specifically, the formulation finds the linear map minimizing the Euclidean distance such that the features after the linear transformation are linearly guarded. The formulation yields a quadratic program with a single linear equality constraint, and thus enjoys a closed-form solution. The authors conduct three experiments to verify the effectiveness of the method. The experiments on removing gender information from BERT show that the proposed method successfully removes gender bias while preserving the classification accuracy. They also apply the proposed method to remove part-of-speech tag information from language models to investigate the importance of POS tags on language modeling tasks. Strengths: The idea of minimizing the Euclidean distance seems natural. The characterization of linear guardedness is sound and useful. Importantly, this paper reveals connections with previously published methods and thus may strengthen the understanding of concept erasure. For example, the theoretical characterization justifies that SAL, Mean Projection and Fair PCA all achieve linear guardedness. Moreover, it is interesting that RLACE has the same top eigenvector as the method proposed in this paper. Weaknesses: 1. The writing overall is good but the structure of writing can be improved. Theorem 4.3 strictly includes Theorem 4.2 as a special case. There is little benefit to separately presenting Theorem 4.2. Thus, they should be merged. 
Additionally, I find most of the proofs are not that insightful and thus it's better to defer them to the appendix. The extra space can be used to expand the preliminaries section as I find the current presentation is not entirely clear for an audience that is not familiar with the topic. 2. One experiment that I am curious about is the part-of-speech tag prediction accuracy after concept scrubbing. As LEACE only guarantees linear guardedness, it is unclear from Section 5 and 6 whether non-linear classifiers still preserves the concept after concept scrubbing. Minor: - Line 325: lineally -> linearly Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. In Figure 1 left, why does random projection have even higher accuracy compared to no intervention? I would expect random projection to be the same or slightly worse than no intervention. Figure 1 right does show they have similar loss. 2. The definition 2.1 might have an issue. The maximization right after Line 56 should be over all possible joint distributions over features and labels that has marginal distribution the same as $Z$. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have listed their limitations in the paper, which have been addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback on the presentation of the paper, especially the proofs. We will revise the manuscript based on their feedback for the camera ready version. > One experiment that I am curious about is the part-of-speech tag prediction accuracy after concept scrubbing. As LEACE only guarantees linear guardedness, it is unclear from Section 5 and 6 whether non-linear classifiers still preserves the concept after concept scrubbing. If the class-conditional distributions $\mathcal{P}(\mathrm{X} | \mathrm{Z} = i) : i \in 1 \ldots k$ are all Gaussian with equal covariance matrices, then LEACE will indeed prevent non-linear classifiers from predicting $\mathrm{Z}$ using the scrubbed representation $\mathrm{X}'$. This is because the resulting distributions $\mathcal{P}(\mathrm{X}' | \mathrm{Z} = i) : i \in 1 \ldots k$ will all be equal, and therefore $\mathrm{X}'$ will not have any mutual information with $\mathrm{Z}$. In most cases, however, concept scrubbing will be unable to prevent non-linear classifiers from extracting information about $\mathrm{Z}$ at least to some extent. > In Figure 1 left, why does random projection have even higher accuracy compared to no intervention? We thank the reviewer for bringing this issue to our attention. We found a typographical error in our plotting code, which caused the “No Intervention” accuracy to be reported as 0.892 instead of the correct value of 0.982. The corrected plot is included in the PDF attached to the general rebuttal as Figure 3. > The definition 2.1 might have an issue. The maximization right after Line 56 should be over all possible joint distributions over features and labels that has marginal distribution the same as Z. Although the current text does state X and Z are “jointly defined,” we agree with the reviewer that this should be made more clear. We intend to fix this issue in the camera-ready version. --- Rebuttal Comment 1.1: Comment: Thank you for the response. 
I suggest the authors incorporate the reviews in the revision. --- Reply to Comment 1.1.1: Comment: Thank you for reading our rebuttal. We would like to draw your attention to the fact that no revisions are allowed until the camera-ready stage (https://neurips.cc/Conferences/2023/PaperInformation/NeurIPS-FAQ). We absolutely will (and have already begun to) incorporate the reviews into our draft, and we are excited to share the improved camera-ready version with you should our paper be accepted.
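The Gaussian special case raised in this thread (equal class covariances implying full, not just linear, guardedness) can be sanity-checked numerically. A sketch of ours, using the true class means for clarity, verifying that erasing the mean difference makes the class-conditional moments, and hence the Gaussian distributions, coincide:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
cov = np.array([[2.0, 0.7], [0.7, 1.0]])      # shared class covariance
L = np.linalg.cholesky(cov)
z = rng.integers(0, 2, n)
means = np.array([[0.0, 0.0], [3.0, -1.0]])
x = means[z] + rng.normal(size=(n, 2)) @ L.T

# Erase the concept by removing the class-mean difference (what linear
# guardedness amounts to in this special case).
x_erased = x - means[z]

# With equal class covariances, the class-conditional distributions now agree
# in their first two moments -- and, being Gaussian, agree entirely, so no
# classifier (linear or otherwise) can beat chance.
m0, m1 = x_erased[z == 0].mean(0), x_erased[z == 1].mean(0)
c0, c1 = np.cov(x_erased[z == 0].T), np.cov(x_erased[z == 1].T)
assert np.allclose(m0, m1, atol=0.15)
assert np.allclose(c0, c1, atol=0.3)
```

When the class covariances differ, the same mean-matching leaves second-moment differences that a non-linear classifier can exploit, which is the caveat the rebuttal raises.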
Rebuttal 1: Rebuttal: _Please see the attached PDF for figures cited in reviewer-specific rebuttals._ We thank all the reviewers for their helpful feedback. We would like to present two simple extensions of the theoretical results from our original submission, which we believe will make the paper even more compelling and useful for practitioners. **Stronger theoretical result on surgicality** First, we prove a new theorem which implies that LEACE is "surgical" in a much stronger sense than we previously claimed. Specifically, we show that LEACE minimizes the mean squared distance between the original and edited features, not only with respect to the Euclidean norm, but with respect to _all_ norms of the form $\| \mathbf x \|_{\mathbf M}^2 = \mathbf x^T \mathbf{M} \mathbf x$ for some positive-semidefinite $\mathbf{M}$. The proof is rather simple and intuitive. We begin with the case where $\mathrm X$ is centered and hence the LEACE bias term is zero. Without loss of generality, assume $\mathrm X$ is written in an orthogonal basis that diagonalizes $\mathbf M$. We can then write the reconstruction error as a weighted sum of $d$ terms, each of which only depends on a single row of the erasure matrix $\mathbf P$: 1. $\mathcal L(\mathbf P) = \mathbb E \Big[ \big\| \mathbf P \mathrm X - \mathrm X \big\|^2_{\mathbf M} \Big] = \sum_{i=1}^d w_i \mathbb E \big[ (\mathbf P_i \mathrm X - \mathrm X_i)^2 \big] = \sum_{i=1}^d w_i \mathcal L_i(\mathbf P_i),\quad \forall i : w_i \ge 0$. The weights $w_i$ in Eq. 1 can be viewed as expressing the "importance" we assign to preventing changes along each component. For example, we might want to weight directions in proportion to their effect on the network's final output. However, we can immediately see that these weights do not affect the set of optimal solutions. 
This is because Equation 1 implies that $\mathcal L$ is _additively separable_ along the rows of $\mathbf P$, and since our erasure constraint $\mathrm{Cov}(\mathbf P \mathrm X, \mathrm Z) = \mathbf P \Sigma_{\mathrm{XZ}} = \mathbf 0_{d \times k}$ can also be decomposed into a set of independent constraints $\mathbf P_i \Sigma_{\mathrm{XZ}} = \mathbf 0_{1 \times k}$ for each row $\mathbf P_i$, we may conclude that $\mathbf P$ is optimal _if and only if_ each of its rows is optimal for its respective subproblem $\mathcal L_i$. But this means that the optimality conditions for $\mathbf P$ are independent of the weights $w_i$, and hence also of the choice of norm. Since we have already proven that the LEACE projection matrix is optimal for the Euclidean norm (i.e. $\mathbf M = \mathbf I$), we can conclude that it is optimal for all p.s.d. $\mathbf M$. The extension to uncentered $\mathrm X$ closely mirrors the proof of Theorem 4.3 in our submission. Define $\tilde{\mathrm X} = \mathrm X - \mathbb E[\mathrm X]$ and $\mathbf{c} = \mathbf{P}\mathbb E[\mathrm X] + \mathbf{b} - \mathbb E[\mathrm X]$. Then we have \begin{align*} \mathbb E \big\|\mathbf{P}\mathrm X + \mathbf{b} - \mathrm X \big\|^2_\mathbf{M} &= \mathbb E \big\|(\mathbf{P}\tilde{\mathrm X} - \tilde{\mathrm X}) + \mathbf{c} \big\|^2_\mathbf{M} \\ &= \mathbb E \big\|\mathbf{P}\tilde{\mathrm X} - \tilde{\mathrm X} \big\|^2_\mathbf{M} + 2\mathbb E \big[ \mathbf{P}\tilde{\mathrm X} - \tilde{\mathrm X} \big]^T \mathbf{M} \mathbf{c} + \mathbf{c}^T \mathbf{M} \mathbf{c} \\ &= \mathbb E \big\|\mathbf{P}\tilde{\mathrm X} - \tilde{\mathrm X} \big\|^2_\mathbf{M} + \mathbf{c}^T \mathbf{M} \mathbf{c}, \end{align*} where we have eliminated the middle term because $\mathbf{P}$ is linear and $\mathbb E[\tilde{\mathrm X}] = 0$. Since $\mathbf{M}$ is p.s.d., our objective is minimized for $\mathbf{c} = \mathbf{0}$, i.e. $\mathbf{b} = \mathbb E[\mathrm X] - \mathbf{P}\mathbb E[\mathrm X]$. 
The problem thus reduces to choosing $\mathbf{P}$ so as to minimize $\mathbb E \big\|\mathbf{P}\tilde{\mathrm X} - \tilde{\mathrm X} \big\|^2_\mathbf{M}$ subject to $\mathrm{Cov}(\mathbf{P}\mathrm X + \mathbf{b}, \mathrm Z) = \mathrm{Cov}(\mathbf{P}\tilde{\mathrm X}, \mathrm Z) = \mathbf{0}$, which occurs when $\mathbf{P}$ is the LEACE projection matrix. We are excited about this result because it suggests that LEACE should be useful under a wide variety of assumptions about which parts of a representation are most important. The choice of norm is not an arbitrary free parameter in our method. **More intuitive closed-form formula & visualization** We would also like to present a new, yet _mathematically equivalent_ formula for the LEACE projection matrix which we believe is significantly more intuitive than the one we previously reported: 2. $r_{\mathrm{LEACE}}(\boldsymbol x) = \boldsymbol x - \mathbf{W}^+ \mathrm{Proj}(\mathbf{W}\Sigma_{\mathrm{XZ}}) \mathbf{W}\big (\boldsymbol x - \mathbb E[\mathrm X] \big )$, where $\mathbf{W} = (\Sigma_{\mathrm{XX}})^{-1/2}$ is a whitening matrix and $\mathrm{Proj}(\mathbf{W}\Sigma_{\mathrm{XZ}})$ is the orthogonal projection matrix onto $\mathrm{colsp}(\mathbf{W}\Sigma_{\mathrm{XZ}})$. Intuitively, Equation 2 tells us that LEACE de-means and whitens $\boldsymbol x$, projects onto the subspace responsible for correlations between $\mathrm X$ and $\mathrm Z$, then unwhitens the result. Finally, it subtracts this value from $\boldsymbol x$, thereby surgically removing the linearly available information about $\mathrm Z$. We encourage the reviewers to examine Figure 1 in the attached PDF, which shows this three-step process in action on a toy dataset. We plan to include the full derivation of this new formulation in the appendix of the camera-ready version, where there is sufficient space. 
We welcome the reviewers to confirm numerically that this formula is indeed equivalent to the one we reported in our original submission, as long as $\Sigma_{\mathrm{XX}}$ is full rank. Pdf: /pdf/184fea6bc3554d0e99a892deb5504319f04af27e.pdf
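Such a numerical check might look as follows (a sketch of ours assuming full-rank $\Sigma_{\mathrm{XX}}$; the toy data and variable names are assumptions), verifying that the whiten-project-unwhiten map of Equation 2 drives the cross-covariance with Z to zero:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 2000, 3, 2
z = np.eye(k)[rng.integers(0, k, n)]                   # one-hot concept labels
x = rng.normal(size=(n, d)) * np.array([3.0, 1.0, 0.5])
x += z @ np.array([[1.0, 0.0, 0.0], [-1.0, 0.5, 0.0]])  # concept-dependent shift

mu = x.mean(axis=0)
xc, zc = x - mu, z - z.mean(axis=0)
sigma_xx = xc.T @ xc / n
sigma_xz = xc.T @ zc / n

# W = Sigma_XX^{-1/2} and its inverse, via eigendecomposition.
evals, evecs = np.linalg.eigh(sigma_xx)
W = evecs @ np.diag(evals ** -0.5) @ evecs.T
W_inv = evecs @ np.diag(evals ** 0.5) @ evecs.T

# Orthogonal projection onto colsp(W Sigma_XZ), with rank truncation
# (one-hot columns of Z are linearly dependent, so Sigma_XZ is rank-deficient).
U, s, _ = np.linalg.svd(W @ sigma_xz, full_matrices=False)
r = int((s > 1e-10 * s.max()).sum())
proj = U[:, :r] @ U[:, :r].T

# Eq. 2: de-mean, whiten, project, unwhiten, then subtract from x.
x_leace = x - (x - mu) @ (W_inv @ proj @ W).T

# The edited representation has (numerically) zero cross-covariance with Z.
cross = (x_leace - x_leace.mean(0)).T @ zc / n
assert np.allclose(cross, 0.0, atol=1e-8)
```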
NeurIPS_2023_submissions_huggingface
2023
Risk-Averse Active Sensing for Timely Outcome Prediction under Cost Pressure
Accept (poster)
Summary: This paper studies the problem of balancing timely and accurate outcome predictions with acquisition costs. To this end, a risk averse active sensing approach (RAS) is proposed that determines when to perform feature acquisition as well as which features to acquire. The proposed approach decomposes the policy into an acquisition scheduler that decides when to perform feature acquisition, and a feature selector that decides which features to acquire. A risk-averse training strategy is introduced to focus on high-risk patients. The proposed approach is evaluated on a synthetic dataset and a real-world dataset from Alzheimer's Disease Neuroimaging Initiative. Strengths: + A continuous-time risk-averse active sensing approach is proposed to balance timely and accurate outcome predictions with acquisition costs. + The proposed approach optimizes timely and accurate prediction for tail-risk patients. + Experimental results on a synthetic and a real-world dataset are provided that illustrate the performance of the proposed approach compared to 3 baselines. Weaknesses: - Even though the problem of active sensing over time (also known as active state tracking) is well-studied, the related work section is very thin on references. - Certain parts of the problem and solution description are unclear or left for the appendix. - Certain notation definitions are missing. - Certain decision choices are not justified. - The structure of the proposed solution, as illustrated in Fig. 2, is not justified. - Experiments are provided only on two datasets, out of which 1 is synthetic. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. The problem of active sensing over time (also known as active state tracking) is well-studied, but the related work section does not include any such references. An example set of references are provided below: - Krishnamurthy, V., 2002. Algorithms for optimal scheduling and management of hidden Markov model sensors. 
IEEE Transactions on Signal Processing, 50(6), pp.1382-1397. - Krishnamurthy, V. and Djonin, D.V., 2007. Structured threshold policies for dynamic sensor scheduling—A partially observed Markov decision process approach. IEEE Transactions on Signal Processing, 55(10), pp.4938-4957. - Krishnamurthy, V., 2013. How to schedule measurements of a noisy Markov chain in decision making?. IEEE Transactions on Information Theory, 59(7), pp.4440-4461. - Zois, D.S. and Mitra, U., 2017. Active State Tracking With Sensing Costs: Analysis of Two-States and Methods for $ n $-States. IEEE Transactions on Signal Processing, 65(11), pp.2828-2843. - Molloy, T.L. and Nair, G.N., 2021, June. Active trajectory estimation for partially observed Markov decision processes via conditional entropy. In 2021 European Control Conference (ECC) (pp. 385-391). IEEE. - Atia, G.K., Veeravalli, V.V. and Fuemmeler, J.A., 2011. Sensor scheduling for energy-efficient target tracking in sensor networks. IEEE Transactions on Signal Processing, 59(10), pp.4923-4937. - Beyer, C., Büttner, M., Unnikrishnan, V., Schleicher, M., Ntoutsi, E. and Spiliopoulou, M., 2020. Active feature acquisition on data streams under feature drift. Annals of Telecommunications, 75, pp.597-611. - Kossen, J., Cangea, C., Vértes, E., Jaegle, A., Patraucean, V., Ktena, I., Tomasev, N. and Belgrave, D., 2022. Active Acquisition for Multimodal Temporal Data: A Challenging Decision-Making Task. arXiv preprint arXiv:2211.05039. A careful literature review and follow-up discussion is needed to place the current paper within the vast prior work on this area and clarify its novelty. 2. The problem studied in this paper also relates to quickest change detection, so it will be informative to include a discussion how the proposed approach contrasts/relates to this line of work. Some example references are provided: - Heydari, J. and Tajer, A., 2017, March. Quickest change detection in structured data with incomplete information. 
In 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 6434-6438). IEEE. - Chaudhuri, A., Fellouris, G. and Tajer, A., 2021, July. Sequential change detection of a correlation structure under a sampling constraint. In 2021 IEEE International Symposium on Information Theory (ISIT) (pp. 605-610). IEEE. 3. Certain parts of the problem and solution description are unclear or left for the appendix. For example, the discussion on the training of the predictor, which is necessary to fully understand how the proposed approach works, is left in the appendix. In addition, it should be mentioned early on in Section 2.1 that a subset of features are selected when acquisition happens. 4. Certain notation definitions are missing or are confusing. For instance, D_{JS}(.) in Eq. (1) is not defined (I assume it is the Jensen-Shannon divergence). Furthermore, what is the relationship between T and I? In Eq. (6), m and \Delta have 3 indices, not one as before, why? In Algorithm 1, A & B should be inputs, right? How are they selected in practice? What is their effect on the performance of the algorithm? 5. The structure of the proposed solution, as illustrated in Fig. 2, is not justified. Why this structure is adopted? Why these neural networks? 6. Certain decision choices are not justified. For example, why the beta distribution is adopted for the policy? 7. Experiments are provided only on two datasets, out of which 1 is synthetic. These are not enough to justify the performance of the proposed approach. To make matters worse, three baselines are provided, out of which only 1 is from prior work. The results on real-world data are not impressive, and in fact comparable with the baselines, apart from the feature acquisition reduction. No statistical significance is provided. There is no justification of the synthetic dataset setting. 8. The usage of the terminology "cost pressure on patient trajectory acquisition" is unclear. 
Also, somewhere in the paper, it is mentioned that there is a budget on acquisition costs but I could not locate where this budget gets into the problem formulation. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: The authors have addressed the limitations of the proposed approach and the potential negative societal impact of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the helpful comments and efforts towards improving our manuscript. We provide responses regarding the reviewer's concerns as follows. ## 1. Clarity - Following the reviewer's advice, we have moved discussions related to our problem formulation and solutions to the main manuscript. - A notation table is available in our response to Reviewer L7Er. Discussion of parameters $A$ and $B$ in Algorithm 1 can be found in Section A.5 of the Appendix. - Beta distribution $\mathrm{Beta}(\alpha, \beta)$ is adopted in our model because i) $\mathrm{Beta}(\alpha, \beta)$ is a uni-modal distribution when $\alpha,\beta>1$; ii) sampling from a Beta distribution is efficient, which helps to reduce the computational complexity. - "Budget" and "cost pressure". Our considered budget is a constraint on the accumulated feature acquisition cost, which creates the "cost pressure" on active sensing policies. For a practical solution to our sensing problem, we relax the budget constraint using a Lagrangian relaxation (second term of reward signal $r$ in Eq. (2)). ## 2. Related work In the active sensing task, our target is to balance the label prediction accuracy and feature acquisition cost. For this purpose, sensing policies should be able to make decisions on both when to perform new measurements and which feature subset to observe. We thank the reviewer for the detailed list of relevant literature. However, we found that many of these studies are incompatible with our considered active sensing setup. We provide analyses by paper category as follows. - Hidden Markov models (HMM) [1]: There are two possible ways to align the HMM formulation with our active sensing settings. - Consider label $y$ as the state of a hidden Markov chain. The dependency between sensing history and label $y$ cannot be properly captured by the HMM structure, which makes these HMM-based methods unsuitable for our active sensing task. 
While the approach proposed in [2] can solve the problem of when to observe in active sensing, it is incapable of handling the feature selection task. - Consider label $y$ as the observation of HMM states. In this case, there is no need to perform active sensing and feature selection. - Active learning [3]: This approach focuses on label prediction in static settings and is irrelevant to the setup of our active sensing task. - Modality selection [4]: This is the most relevant paper. However, it only addresses the feature selection task and assumes a predetermined time interval between observations. - Quickest change detection [5]: This paper is closely related to optimal stopping, which has already been discussed in our manuscript. Further, the correlation structure in sensor networks is unrelated to our active sensing problem. ## 3. Experiments - As requested by the reviewer, we added a new baseline (LL) utilizing the log-likelihood-based reward signal introduced in [4]. The updated benchmark table on the synthetic dataset is provided in the attachment of our global response. - Regarding the performance on real-world datasets, we would like to argue that one important advantage of our method is the optimization for tail-risk samples. As illustrated in Fig. A.1 of the Appendix, our method (RAS) can effectively improve the sensing performance for the tail-risk patients for whom the expected cost $Q^\pi(\mathbf{X})$ is significantly higher under the FO and Fixed baselines. - We agree with the reviewer that the inclusion of new real-world datasets for evaluation would help to better justify the performance of our method. We will include new evaluations with the MIMIC-III dataset in the final version of our manuscript. ## 4. Justification of the synthetic dataset In typical application scenarios of active sensing, there exist both variables that are highly predictive of the outcome of interest and those that are noisy, less informative proxy variables of the outcome. 
The acquisition costs of strong predictors are generally higher than those of the proxies, such that balancing between accuracy and observation costs through active sensing is critical. For the synthetic dataset, variable $x_1$ is the strong predictor of outcome $y$, and $x_2$ is the proxy variable with a higher noise level. Thus, the measurement costs for $x_1$ and $x_2$ are set to 1.0 and 0.1, respectively. The remaining two variables $x_3$ and $x_4$ are irrelevant to outcome $y$. For reference, their acquisition costs are simply set to 1.0. ## Reference [1] Krishnamurthy, V., 2002. Algorithms for optimal scheduling and management of hidden Markov model sensors. IEEE Transactions on Signal Processing, 50(6), pp.1382-1397. [2] Krishnamurthy, V., 2013. How to schedule measurements of a noisy Markov chain in decision making? IEEE Transactions on Information Theory, 59(7), pp.4440-4461. [3] Beyer, C., Büttner, M., Unnikrishnan, V., Schleicher, M., Ntoutsi, E. and Spiliopoulou, M., 2020. Active feature acquisition on data streams under feature drift. Annals of Telecommunications, 75, pp.597-611. [4] Kossen, J., Cangea, C., Vértes, E., Jaegle, A., Patraucean, V., Ktena, I., Tomasev, N. and Belgrave, D., 2022. Active Acquisition for Multimodal Temporal Data: A Challenging Decision-Making Task. arXiv preprint arXiv:2211.05039. [5] Heydari, J. and Tajer, A., 2017, March. Quickest change detection in structured data with incomplete information. In 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 6434-6438). IEEE. --- Rebuttal Comment 1.1: Comment: I have read the authors' response as well as the rest of the reviews and I have decided to raise my score accordingly. Still, I believe that the evaluation is limited.
Summary: This paper proposes an active sensing method that answers the questions of when and what diagnostic test to conduct in order to optimize the trade-off between the cost of acquisition and the timeliness and accuracy of the predictive model. Compared with existing active sensing models that assume a fixed data collection schedule, it builds in the flexibility to determine the timing of data collection. The paper demonstrates the utility of the proposed method using synthetic data and a real-world dataset. Strengths: The proposed method adds extra flexibility in the decision-making policy to decide when (using a continuous-time active sensing formulation) to conduct a diagnostic test, in addition to what test to complete, which is what existing models can already do. This is a well-motivated problem. Weaknesses: As shown in the attachment, Table A3 on the empirical experiment with a real-world dataset, the proposed method shows only marginal improvement in accuracy and timeliness, but the cost has almost doubled compared with the benchmark; I am not sure whether these results suggest better overall model quality. This raises questions about the practical utility of the proposed method. In addition, there are several places where the presentation/notation is confusing and hard to follow; see below. There is some notation confusion - for example, in Figure 2, the arrow from the topic box only sources from the post-interpolation sensing history, not the post-interpolation of the observational patient trajectory; it is unclear why the exclusion of the trajectory history is justifiable, or is it an error in the drawing? Figure 1, subplot (a) is extremely small; in subplot (b), the vertical dotted line is not explained. The equation numbers and figure numbers in some places are not clearly identified, leaving the reader to guess, which causes confusion, e.g. 
Page 5 bottom part Technical Quality: 3 good Clarity: 2 fair Questions for Authors: In the real-world dataset evaluation, it seems to be a three-class classification, while the performance is reported using AUC, etc., which are binary-classifier performance metrics; do you weight the classes when presenting the multi-class results? Why does it need to be formulated as classification? Would it be more relevant to formulate it as a regression problem instead of binning into discrete categories? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The authors could comment on how the bias/quality of the decisions in the original dataset may impact the validity of the approach. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer for helping review our paper and providing valuable comments to improve our manuscript. We address the reviewer's concerns on the clarity of our paper and the experimental evaluation of our method as follows. ## 1. Clarity We apologize for the confusion. We have fixed the readability and equation numbering issues mentioned by the reviewer and will further improve the clarity of our manuscript in the final version. ### 1.1 Data flow from observational patient trajectory in Fig. 2 The reviewer is correct. Data flows should come from both the sensing history and the observational patient trajectory. One arrow was missing after the interpolator box in Fig. 2 of our manuscript. We have corrected this mistake and improved the general readability of Fig. 2 in the revision. ## 2. Experiments ### 2.1 Benchmark on the real-world dataset In our benchmark, both prediction accuracy (ROC, PRC) and acquisition cost (COST) are important performance metrics to take into account. Depending on which aspects of a sensing strategy are emphasized, there are multiple ways to rank the four methods in Table 2 of our manuscript. As a balanced score function, we propose the criterion $\omega$, the decrease in accuracy per unit reduction in acquisition cost, i.e., $\omega = \frac{\Delta PRC}{\Delta COST}$, where - $\Delta PRC = \max(0, PRC_{FO} - PRC)$: drop in accuracy, where $PRC_{FO}$ is the PRC score of the FO baseline. - $\Delta COST = COST_{FO} - COST$: reduction in acquisition cost, where $COST_{FO}$ is the COST of the FO baseline. In Table A3 of the Appendix, our proposed method RAS achieves nearly no decrease in accuracy while reducing the acquisition cost from 26.4 to 9.89. Based on the criterion $\omega$, RAS has the best performance. In comparison, ASAC would be the best method when the main target is simply to reduce the acquisition cost. 
In practice, the criterion for ranking different approaches should be carefully designed based on the specific application scenario. Regarding the practical utility of our proposed method, we would like to highlight that one important advantage of our method is the optimization for tail-risk samples. As illustrated in Fig. A.1 of the Appendix, RAS can effectively improve the sensing performance for the tail-risk patients for whom the expected cost $Q^\pi(\mathbf{X})$ is significantly higher under the FO and Fixed baselines. ### 2.2 AUC score for multi-class classification The reviewer is correct. There are three classes in our considered ADNI dataset. For the accuracy evaluation, we apply the one-vs-rest (OvR) strategy to calculate the AUC score for each class and report the average score across all classes as the summary. ### 2.3 Classification vs. regression on ADNI On the ADNI dataset, we create patient labels from their cognitive test scores (CDRSB) as indicators of the different stages of the disease. As the reviewer mentioned, the cognitive test scores could be used directly in a regression task. However, we would like to argue that the discrete labels are more appropriate descriptions of the disease progression. According to Table 5 in [1], patients with dementia may have CDRSB scores in a wide range (4.5 to 18.0). This suggests that the CDRSB scores can be highly noisy and are unsuitable for patient staging on the ADNI dataset. ## 3. Discussion of limitations We appreciate the reviewer's comments on decision biases. Decisions made in the data collection process of a dataset indeed have a potential impact on our proposed approach. First, the validity of our method relies on the baseline predictor $f_P$. If the training dataset is of low quality, $f_P$ is likely to have poor predictive power, which can compromise the usefulness of our proposed sensing policy. 
In the meantime, highly biased decisions during data collection may lead to high missing rates for certain feature variables, which could mislead our method into overlooking the importance of such features. We will include a detailed discussion in the final version of our manuscript. ## Reference [1] O'Bryant SE, Lacritz LH, Hall J, et al. Validation of the new interpretive guidelines for the clinical dementia rating scale sum of boxes score in the national Alzheimer's coordinating center database. Arch Neurol. 2010;67(6):746-749. --- Rebuttal Comment 1.1: Comment: Thank you for the clarification on the practical utility of the proposed approach. If the optimization for tail-risk samples is important, then it should be included in the main part of the paper rather than buried in the appendix. That discussion needs to be upfront. --- Reply to Comment 1.1.1: Title: Thank you for your valuable comments Comment: Dear reviewer, We sincerely appreciate your valuable comments. We agree that the optimization for tail-risk samples in our proposed approach should be highlighted in our main manuscript. In the final version of our paper, we will include results and discussion on this aspect using the additional page. Given that we have addressed your primary concerns raised in the review, we kindly ask you to consider adjusting the review score while taking our rebuttal into account. Your comments have greatly helped us improve the clarity of our manuscript, and we are genuinely grateful for your thoughtful suggestions. Thank you once again for your time and input. Best regards, Paper Authors
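The balanced score $\omega$ defined earlier in this rebuttal thread can be sketched as below. This is our illustrative reconstruction, not the authors' code; the cost values echo the reduction from 26.4 to 9.89 quoted above, while the PRC values are hypothetical.

```python
def omega(prc, cost, prc_fo, cost_fo):
    """Balanced criterion from the rebuttal: accuracy drop per unit of
    saved acquisition cost. Lower is better; 0 means the cost was cut
    with no loss of accuracy relative to the fully-observed (FO) baseline."""
    delta_prc = max(0.0, prc_fo - prc)    # drop in accuracy
    delta_cost = cost_fo - cost           # reduction in acquisition cost
    return delta_prc / delta_cost

# Cost reduction quoted in the rebuttal (26.4 to 9.89) with no PRC drop:
print(omega(prc=0.634, cost=9.89, prc_fo=0.634, cost_fo=26.4))  # -> 0.0
```

A method that trades accuracy for savings would instead receive a positive $\omega$, so methods can be ranked by ascending $\omega$.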
Summary: This work studies the problem of active cost-aware feature acquisition in time-varying feature settings by breaking down feature selection and prediction decision-making into two policies. The problem being considered is generally difficult even in non-time-varying feature settings. I have a few questions about the real-world applicability, the assumptions, and the approach. I would like to hear from the authors (see below). Strengths: The problem is well motivated for the healthcare domain and is of clear practical value. The paper is well-written and clear. I found considering risk and long-tail behavior very interesting and quite important, as cost optimization often means dropping precision for minorities, which is not acceptable for health-domain use cases. Weaknesses: The method used to optimize the cost of acquisition, i.e., a linear coefficient to balance cost vs. prediction risk, is quite simple and does not necessarily lead to stable or near-optimal results. In real-world settings, the balance point might be different for each sample, and it is even more complicated when we take the time-varying assumption into account. (see other comments) Technical Quality: 3 good Clarity: 3 good Questions for Authors: I was wondering what the motivation was behind having two policies rather than one with a larger action space? Following on the previous question, wouldn't the two-policy approach result in less optimal solutions compared to the optimization of one joint policy? In real-world settings where we do not have a fully observed training set (assuming a cost-aware approach or a version of your method was applied at the time of data collection), how would the training process work? Please correct me, but my understanding is that we do need a fully observed training set for building the baseline predictor. Confidence: 3: You are fairly confident in your assessment. 
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: I have no particular concern Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's valuable comments and suggestions. We address the questions on our problem formulation and experiments as follows. ## 1. Problem formulation ### 1.1 Motivation of policy decomposition The motivation for our sensing policy decomposition is threefold: - First, the action spaces of the acquisition scheduler and the feature selector are orthogonal and can generate all possible decisions of "what to observe, and when" at the next visit given the patient trajectory. - Second, for each feature variable, the following decision cycle happens repeatedly: i) new acquisition of patient covariates is performed based on the latest sensing history; ii) then, the next follow-up time is determined based on the new sensing history including the newly accrued measurements in step i). The policy decomposition of acquisition scheduler $\pi_\Delta$ and feature selector $\pi_m$ is a straightforward reflection of such a practical sensing cycle. - Finally, in most real-world scenarios, multiple selected feature variables are usually measured at the same visit. Therefore, we combine the decisions on individual feature selection at each visit into a binary vector $\mathbf{m}$ and model its distribution with feature selector $\pi_m$. ### 1.2 Optimality of solution As we explained above, the two sub-policies in our formulation have orthogonal action spaces and are mathematically equivalent to a global policy that determines observation interval $\Delta$ and feature selection mask $\mathbf{m}$ simultaneously. Despite the policy decomposition, the solution of our proposed active sensing problem is equivalent to that of the optimal global policy with a larger action space. ## 2. 
Method ### 2.1 Balance between the acquisition cost and prediction accuracy We agree with the reviewer that, in some real-world settings, the balance point between the feature acquisition cost and prediction accuracy may vary across different samples, and the linear equilibrium achieved under our objective function may not be sufficient to tackle such a scenario. However, our method can easily be extended to tackle such challenges. For instance, by varying the linear coefficient $\lambda$ over some reasonable range, a list of optimal sensing policies can be obtained with our proposed method. Then, we can construct an ensemble of these policies and fine-tune the weights for the different policies on a per-sample basis. This approach should be applicable even when the balance between acquisition cost and prediction accuracy is time-dependent. We leave this as future work as it is outside the scope of our manuscript. ### 2.2 Training set with missing values The reviewer is correct that a baseline predictor is required to train our proposed sensing approach RAS. Leveraging the power of neural CDEs in tackling missing data, the baseline predictor can be built on training data with partial observations. Thus, the training set does not need to be fully observed (e.g., MIMIC-III). Indeed, we note that the effectiveness of RAS is highly correlated with the quality of the baseline predictor $f_P$. A training set with high missing rates in the feature records will generally lead to poor accuracy of $f_P$, which will unavoidably affect the validity of our sensing policy. However, it is worth highlighting that the same baseline predictor was used for the benchmarks (i.e., the variations of our model). ### 2.3 Online training during data collection Our method is designed to work with batch data in offline settings. It is not intended to be trained online during data collection. 
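The decision cycle and policy decomposition defended in this rebuttal can be sketched as follows. This is a hypothetical illustration only: the parameter networks are elided, the scaling of the Beta sample to a time interval is our assumption, and independent Bernoulli draws stand in for the feature selector $\pi_m$; the uni-modality of $\mathrm{Beta}(\alpha,\beta)$ for $\alpha,\beta>1$ is the property the authors cite in another rebuttal.

```python
import numpy as np

rng = np.random.default_rng(0)

def acquisition_scheduler(alpha, beta, max_interval=12.0):
    """pi_Delta: sample the time until the next visit from a scaled Beta.
    For alpha, beta > 1 the Beta density is uni-modal, and sampling is cheap."""
    return max_interval * rng.beta(alpha, beta)

def feature_selector(probs):
    """pi_m: sample a binary selection mask m, one Bernoulli per feature."""
    return (rng.random(len(probs)) < probs).astype(int)

# One decision cycle: i) choose which features to acquire at this visit,
# ii) then schedule the next follow-up based on the updated history.
mask = feature_selector(np.array([0.9, 0.5, 0.1, 0.1]))
delta = acquisition_scheduler(alpha=2.0, beta=5.0)
```

The two draws act on orthogonal action spaces, mirroring the rebuttal's argument that the factored policy covers all joint "what and when" decisions.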
--- Rebuttal Comment 1.1: Title: Re: Rebuttal by Authors Comment: I have read the authors' feedback and updated my evaluation accordingly. I appreciate the response to my questions but I still have concerns about the practical application of this method. --- Reply to Comment 1.1.1: Title: We appreciate your insightful comments Comment: Dear reviewer, We appreciate your insightful comments. We would like to further clarify the practical application of our method. First, we concur with the reviewer's observation that our approach, which maintains a linear trade-off between diagnostic accuracy and observation costs, has some limitations. In complex settings where a personalized trade-off coefficient is imperative, our method may not generate the optimal solution. Nevertheless, we contend that similar linear scalarization techniques have found extensive application in solving practical multi-objective optimization problems [1]. Within the specific scenarios outlined in our manuscript, our method adeptly attains both stability and optimality through the linear trade-off approach. We acknowledge that extending our method to accommodate tasks necessitating an individualized trade-off is an important area for future exploration. Second, regarding the practical application of our method, we have consulted with several clinical collaborators. They consider our objective function (as a weighted sum of accuracy and costs) to be a reasonable design choice for real-world applications. Furthermore, our clinical collaborators have identified three distinct practical scenarios wherein our method is exceptionally well-suited: 1. Active surveillance for prostate cancer [2]. Prostate cancer is a common disease among male patients, and overdiagnosis is a problem. Most individuals newly diagnosed with low-grade prostate cancer would not require immediate treatment (radiotherapy, surgery, etc.), which has a great impact on their quality of life. 
Active surveillance is the only approach to pinpoint patients with higher risks of death due to prostate cancer. 2. Lung nodule management in lung cancer screening. The process of lung cancer screening frequently detects lung nodules, the malignancy of which remains uncertain. Empirical risk models alongside repeated CT/PET scans are widely adopted to monitor the disease progression. However, definitive diagnosis often necessitates invasive procedures like biopsy or surgical resection, which pose risks to patients. In this context, weighing the additional diagnostic accuracy of repeated scanning against its cost (including radiation exposure) in active sensing is valuable. 3. The ongoing management of cancer recurrence. Cancer recurrence may occur weeks or even years following the initial treatment. Early detection and intervention of the recurrence mandate vigilant monitoring. However, the predictive variables involved (biomarkers, imaging, biopsies) carry concomitant costs, radiation exposure, and potential adverse consequences for patients. Notably, radiation exposure is established as a significant risk factor for second, independent cancers among those treated for cancer [3]. Here, active sensing is essential in minimizing the adverse impact on patients during the management of cancer recurrence. These clinical scenarios underscore the real-world relevance and practical impact of our work. We trust that these examples more effectively highlight the significance of our approach. We sincerely hope that this response adequately addresses your concerns regarding the practical applications of our method. Once again, we extend our gratitude for your time and valuable comments. Best regards, Paper Authors [1] Eriskin, Levent, Mumtaz Karatas, and Yu-Jun Zheng. "A robust multi-objective model for healthcare resource management and location planning during pandemics." Annals of Operations Research (2022): 1-48. 
[2] https://www.nice.org.uk/guidance/ng131/chapter/Recommendations. [3] Demoor-Goldschmidt, Charlotte, and Florent de Vathaire. "Review of risk factors of secondary cancers among cancer survivors." The British journal of radiology 92.1093 (2019): 20180390.
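The linear trade-off debated in this thread, and the $\lambda$-ensemble extension the authors suggest, can be sketched as follows; `train_policy` is a hypothetical stand-in for the actual training loop, and the reward function only mirrors the weighted-sum shape of the objective described in the rebuttal (an accuracy term minus a Lagrangian penalty on acquisition cost).

```python
def reward(pred_mismatch, acq_cost, lam):
    """Linear scalarization: penalize prediction mismatch and, via the
    trade-off coefficient lam, the accumulated acquisition cost."""
    return -pred_mismatch - lam * acq_cost

def policy_sweep(train_policy, lams=(0.01, 0.1, 1.0, 10.0)):
    """Train one policy per trade-off coefficient lambda; the resulting
    set can then be ensembled with per-sample weights, as proposed."""
    return {lam: train_policy(lam) for lam in lams}
```

Varying `lams` traces out the accuracy/cost frontier, which is the basis of the ensemble idea in the reply above.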
Summary: This paper investigates timely outcome prediction by proposing a novel risk-averse active sensing approach, RAS. The proposed RAS decomposes the policy into an acquisition scheduler and a feature selector to address the composite decision problem of when to conduct the acquisition and which measurements to make. In addition, RAS enables the prioritization of tail-risk patients in the risk-aversion training procedure. The experiments on synthetic and real-world healthcare datasets show its effectiveness. Strengths: This paper studies a very interesting and important problem - active sensing methods for early detection and intervention of adverse events. The proposed risk-averse active sensing method is technically sound. The experimental results show its effectiveness compared to the three baselines. Weaknesses: Minor comments: 1. There are so many symbols used in the paper; it would be helpful to include a notation table in the appendix. 2. The titles of Section 2 and Section 3 are confusing. Probably, Section 2 should be Problem Formulation (or General Framework) and Section 3 Methodology. 3. Highlight the best performance in Table 2. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Figure 3 shows that both NRAS and RAS can reduce the long tail in the distribution. It seems NRAS is better than RAS for reducing the long tail, right? Why does this happen? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The authors have discussed the broader impact of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. 
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the insightful comments. The reviewer's comments regarding clarity and experimental results are addressed below. ### 1. Notation table. We thank the reviewer for the suggestion of including a notation table to improve clarity. The definitions and explanations of the major notations used in our manuscript are listed in Table R1. Table R1. Major notations in the manuscript.

| Notation | Definition | Notation | Definition |
| -------- | ---------- | -------- | ---------- |
| $\mathbf{x}$ | Patient feature variable | $\mathbf{X}$ | Patient trajectory |
| $y$ | Patient outcome | $\mathbf{\hat{X}}^\pi$ | Sensing history under policy $\pi$ |
| $\pi$ | Sensing policy | $\pi_\Delta$ | Acquisition scheduler |
| $\pi_m$ | Feature selector | $\mathbf{m}$ | Selection mask |
| $D_{JS}$ | Jensen–Shannon divergence | $\mathbf{c}$ | Cost vector |
| $r_y$ | Cumulative mismatch in label prediction | $r_m$ | Cost of feature acquisition |
| $r$ | Reward signal | $Q^\pi(\mathbf{X})$ | Expected cost |
| $CVaR$ | Conditional value-at-risk | $\alpha$ | Tail-risk quantile |
| $T$ | End time of observation | $I$ | Number of observations in sensing history |
| $N$ | Number of samples | $\mathcal{I}$ | Interpolator |

We will include a more detailed notation table in the final version of our paper. ### 2. Titles of Sections 2 and 3 We apologize for the confusion caused by the similar titles used for Sections 2 and 3 in our manuscript. We will modify the title of Section 2 to "Problem Formulation," as recommended by the reviewer. ### 3. Highlight the best performance in Table 2. In our benchmark, both prediction accuracy (ROC, PRC) and acquisition cost (COST) are important performance metrics considering their clinical impacts. Depending on which aspects of a sensing approach are emphasized, there are multiple ways to rank the four methods in Table 2. 
Here, we propose one possible criterion, denoted $\omega$, which measures the decrease in accuracy per unit reduction in acquisition cost, i.e., $\omega = \frac{\Delta PRC}{\Delta COST}$, where - $\Delta PRC = \max(0, PRC_{FO} - PRC)$: drop in accuracy, where $PRC_{FO}$ is the PRC score of the FO baseline. - $\Delta COST = COST_{FO} - COST$: reduction in acquisition cost, where $COST_{FO}$ is the COST of the FO baseline. Note that our method, RAS, achieves no loss in accuracy while reducing the acquisition cost from 39.6 to 4.535. Based on the criterion $\omega$, RAS provides the best performance, which is highlighted in Table R2 below: Table R2: Benchmark of active sensing performance.

| METHOD | ROC | PRC | COST | $d_{\delta=0.3}$ | $d_{\delta=0.5}$ | $d_{\delta=0.7}$ |
| ------ | --- | --- | ---- | ---- | ---- | ---- |
| FO | 0.668±0.000 | 0.634±0.000 | 39.600±0.000 | 0.582±0.000 | 0.229±0.000 | 0.181±0.000 |
| ASAC | 0.582±0.035 | 0.527±0.023 | 9.189±1.895 | 1.052±0.339 | 1.326±0.063 | 1.323±0.065 |
| FIXED | 0.655±0.006 | 0.600±0.005 | 0.907±0.034 | 1.384±0.000 | 1.398±0.000 | 1.359±0.000 |
| **RAS** | 0.678±0.006 | 0.635±0.005 | 4.535±0.088 | 0.142±0.018 | 0.132±0.028 | 0.154±0.032 |

### 4. Comparison of the total acquisition cost distribution in Figure 3. We appreciate the reviewer for pointing this out. As mentioned by the reviewer, both RAS and its non-risk-averse version (NRAS) can mitigate the long tail in the distribution of the expected cost function $Q^\pi(\mathbf{X})$ compared to the baseline policy with a constant decision interval. Due to the inherent randomness of the stochastic policies, in a single evaluation, the NRAS baseline might appear to outperform RAS at some outliers, as observed in Figure 3. For a clearer comparison between RAS and NRAS, we have re-run the experiment in Figure 3 with the cost $Q^\pi(\mathbf{X})$ of each sample averaged over 5 random training/testing splits. 
The new result clearly demonstrates that our method RAS can further reduce the long tail in the $Q^\pi(\mathbf{X})$ distribution compared to NRAS, highlighting the importance of our proposed tail-risk minimization strategy. Please check the PDF attachment for the updated figure. --- Rebuttal Comment 1.1: Title: Thanks for your response Comment: Thanks for the response. I have read it. The new experimental result is quite different from the result shown in the submitted paper. This makes me concerned about the reliability of the proposed method and experiments in this paper. --- Reply to Comment 1.1.1: Title: We apologize for any lack of clarity in describing the new results in our rebuttal Comment: Dear Reviewer, We apologize for the confusion regarding the new results we posted in our rebuttal. We would like to provide further clarification on the changes. ### 1. Updated Table 1 in the attachment As requested by Reviewer 1dSC, we have included a new baseline (LL) in the benchmark. All other results in Table 1 remain unchanged. Our analysis remains valid with the addition of the new baseline. ### 2. New results in Figure 3 We believe there may have been a misunderstanding by the reviewer due to our unclear description in the rebuttal. The new results are still consistent with our original Figure 3. The differences in the results are due to slight changes in the setup, which were necessary given the limited time available for the rebuttal period. - Since our primary focus is to highlight the effectiveness of RAS in reducing the long tail of the $Q^\pi(\mathbf{X})$ distribution, we chose to conduct the experiment for Figure 3 with a smaller number of training epochs to facilitate timely evaluation. As illustrated in the updated Figure 3 in the attached document, RAS effectively reduces the number of tail samples compared to NRAS, even with fewer training epochs in the new setup. 
- It is important to note that the convergence speed of NRAS and RAS differs due to the reduced number of effective samples (only tail-risk samples) for RAS in each epoch. The reduction in the number of training epochs unavoidably affected the shape of the $Q^\pi(\mathbf{X})$ distribution in the new results. We apologize for any confusion and assure you that a sufficient number of training epochs will be used in the final version of our manuscript.
- In the new results, we plotted all samples in the synthetic dataset to emphasize the difference between RAS and NRAS in the long tail. In contrast, only test-set samples were used in the original Figure 3. This changed the y-axis scales and potentially affected the shape of the $Q^\pi(\mathbf{X})$ distribution as well.
- The Fixed baseline was not included in the new Figure 3, which resulted in a different scale on the x-axis.

Once again, we extend our gratitude to the reviewer for the thorough evaluation of our new results. We apologize for any lack of clarity in describing the new setup in our rebuttal. We hope that our clarifications address the reviewer's concerns regarding the new results and the reliability of our method, and lead to a reconsideration of the score.

Best regards,
Paper Authors
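The criterion $\omega = \Delta PRC / \Delta COST$ discussed in this thread can be checked numerically against the Table R2 values; the sketch below (the helper name is our own, not from the rebuttal) reproduces the claim that RAS loses no accuracy relative to FO:

```python
# Hypothetical helper computing omega = delta_PRC / delta_COST against the
# FO baseline, using the PRC and COST values from Table R2.
PRC_FO, COST_FO = 0.634, 39.600

def omega(prc, cost):
    d_prc = max(0.0, PRC_FO - prc)   # drop in accuracy (never negative)
    d_cost = COST_FO - cost          # reduction in acquisition cost
    return d_prc / d_cost

# Lower omega is better: less accuracy lost per unit of cost saved.
for name, prc, cost in [("ASAC", 0.527, 9.189),
                        ("FIXED", 0.600, 0.907),
                        ("RAS", 0.635, 4.535)]:
    print(name, round(omega(prc, cost), 5))
# RAS attains omega = 0: its PRC matches (slightly exceeds) FO, so the
# max(0, ...) clamp makes the accuracy drop zero despite the large cost saving.
```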
Rebuttal 1:

Rebuttal: We would like to thank all reviewers for taking the necessary time and effort to review our manuscript. We sincerely appreciate all your valuable comments and suggestions, which helped us improve the quality of the manuscript.

## Summary of related work mentioned by reviewer d1SC

We thank reviewer d1SC for providing an extensive list of pertinent literature. We summarize these papers in Table R1 below for reference.

Table R1. Comparison with additional literature.

| Method | Focus | Observation | Time interval | Action |
| ------ | ------- | --------- | ----------- | -------------- |
| RAS (Ours) | Label $y$ | Sensing history $\tilde{\mathbf{X}}$ | Adaptive (continuous) | Determine when and what to observe |
| [1,2,4] | HMM state $e$ | Discrete measurement of $e$ | Constant | Select most relevant sensor to measure $e$ |
| [3] | HMM state $e$ | Scalar measurement of $e$ | Adaptive (discrete) | Schedule measurements of $e$ |
| [5] | HMM state $e$ | Observation of $e$ | Constant | Track the hidden state $e$ |
| [6] | HMM state $e$ | Location of $e$ | Constant | Schedule sensors to locate a target object |
| [7] | Model accuracy | Sample stream | Constant | Select samples and features to improve training set |
| [8] | Label $y$ | Multi-modal data $\mathbf{\tilde{x}}$ | Constant | Query most relevant modality data source for prediction |
| [9,10] | Correlation structure | Node states | Constant | Detect changes in the correlation structure of a sensor network |

## References

[1] Krishnamurthy, V., 2002. Algorithms for optimal scheduling and management of hidden Markov model sensors. IEEE Transactions on Signal Processing, 50(6), pp.1382-1397.

[2] Krishnamurthy, V. and Djonin, D.V., 2007. Structured threshold policies for dynamic sensor scheduling—A partially observed Markov decision process approach. IEEE Transactions on Signal Processing, 55(10), pp.4938-4957.

[3] Krishnamurthy, V., 2013. How to schedule measurements of a noisy Markov chain in decision making? IEEE Transactions on Information Theory, 59(7), pp.4440-4461.

[4] Zois, D.S. and Mitra, U., 2017. Active state tracking with sensing costs: Analysis of two-states and methods for n-states. IEEE Transactions on Signal Processing, 65(11), pp.2828-2843.

[5] Molloy, T.L. and Nair, G.N., 2021. Active trajectory estimation for partially observed Markov decision processes via conditional entropy. In 2021 European Control Conference (ECC) (pp. 385-391). IEEE.

[6] Atia, G.K., Veeravalli, V.V. and Fuemmeler, J.A., 2011. Sensor scheduling for energy-efficient target tracking in sensor networks. IEEE Transactions on Signal Processing, 59(10), pp.4923-4937.

[7] Beyer, C., Büttner, M., Unnikrishnan, V., Schleicher, M., Ntoutsi, E. and Spiliopoulou, M., 2020. Active feature acquisition on data streams under feature drift. Annals of Telecommunications, 75, pp.597-611.

[8] Kossen, J., Cangea, C., Vértes, E., Jaegle, A., Patraucean, V., Ktena, I., Tomasev, N. and Belgrave, D., 2022. Active acquisition for multimodal temporal data: A challenging decision-making task. arXiv preprint arXiv:2211.05039.

[9] Heydari, J. and Tajer, A., 2017. Quickest change detection in structured data with incomplete information. In 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 6434-6438). IEEE.

[10] Chaudhuri, A., Fellouris, G. and Tajer, A., 2021. Sequential change detection of a correlation structure under a sampling constraint. In 2021 IEEE International Symposium on Information Theory (ISIT) (pp. 605-610). IEEE.

Pdf: /pdf/52aab6c3ea91b01961a19b4382d2a6b41e4f1d8c.pdf
NeurIPS_2023_submissions_huggingface
2023
Discovering Intrinsic Spatial-Temporal Logic Rules to Explain Human Actions
Accept (poster)
Summary: This paper proposes a model for inferring rules based on observation trajectories of (entity, time, location), with the aim of maximizing the probabilities of certain "events". The aim is to predict the next events based on past trajectories, conditioned on the latent "rules", which need to be marginalized over. Since this is intractable, an "encoder" or "recognition" network is used, similar in spirit to a variational autoencoder. In the encoder, a transformer defines the probability distribution over possible rules based on input trajectories, and a decoder can sample these rules to generate the next event. The model is used for trajectory prediction tasks on two datasets with better results than existing trajectory prediction models. Examples of learnt rules are also given for one dataset.

Strengths:

1) A nice framework is proposed for the representation of spatio-temporal action rules based on logic
2) The "rules" are expressed as a latent variable model, where inference can be done using the EM algorithm applied to an encoder-decoder model
3) Strong results are shown on trajectory prediction tasks, and sample rules are also shown

In my opinion, the rule definition and rule inference framework is quite general and can find wider applications than the experiments described in the paper.

Weaknesses: It is not very clear how exactly the goal states are estimated (e.g. in Fig 3), or how the future trajectories are generated (Fig 3 and experiments). I get the broad idea, but precision is missing.

Technical Quality: 3 good

Clarity: 2 fair

Questions for Authors:

1) Can we have an experiment in a synthetic setting with ground-truth data generated according to specific rules (e.g. like a cellular automaton), and then show whether the proposed model can recover the rules?
2) Please add an algorithm to explain how the trajectory prediction is carried out, step by step.
3) What can be other applications of this framework beyond trajectory prediction?
Also, can the "rules" here be related to "policies" to choose actions, as in Reinforcement Learning?
4) Can the rule probability distribution be modelled by a cheaper model than a Transformer?

Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.

Soundness: 3 good

Presentation: 2 fair

Contribution: 3 good

Limitations: NA

Flag For Ethics Review: ['No ethics review needed.']

Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.

Code Of Conduct: Yes
Rebuttal 1:

Rebuttal: We sincerely thank you for your recognition of our work and your insightful reviews! We hope our responses can address your questions. Our responses are listed below.

**Q1: It is not very clear how exactly the goal states are estimated (e.g. in Fig 3), or how the future trajectories are generated.**

A1: Following [1][2], the goal and waypoint heatmaps are extracted from the decoder in our framework. We estimate distributions of future waypoint positions, which, along with the goal points, are used to obtain explicit maps over all the remaining intermediate trajectory positions.

[1] Mangalam K, An Y, Girase H, et al. From goals, waypoints & paths to long-term human trajectory forecasting[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021: 15233-15242.

[2] Jacobs H O, Hughes O K, Johnson-Roberson M, et al. Real-time certified probabilistic pedestrian forecasting[J]. IEEE Robotics and Automation Letters, 2017, 2(4): 2064-2071.

**Q2: Can we have an experiment in a synthetic setting with ground-truth data generated according to specific rules (e.g. cellular automata), and then show whether the proposed model can recover the rules?**

A2: Yes, thanks for your advice. We follow [3] to verify our model's rule discovery ability on synthetic datasets with a known set of ground-truth rules and weights. Note that [3] was originally designed for temporal point processes, so we modify it by adding spatial variables (such as "left, right, front, and behind") to fit our setting. The weight learning results on 4 synthetic datasets are shown in the following table. More results are shown **in the attached PDF Figure 2.**

| Weight | Dataset-1 | Dataset-2 | Dataset-3 | Dataset-4 |
|--|--|--|--|--|
| w0 | 1.0/0.98 | 0.5/0.45 | 1.5/1.47 | 2.0/1.82 |
| w1 | 1.0/0.91 | 0.5/0.40 | 1.5/1.44 | 1.0/0.97 |
| w2 | 1.0/0.81 | 0.5/0.34 | 1.5/1.39 | 1.0/0.92 |

Table: Rule discovery and weight learning results (GT weights/learned weights) on 4 synthetic datasets.
[3] Li S, Feng M, Wang L, et al. Explaining point processes by learning interpretable temporal logic rules[C]//International Conference on Learning Representations. 2021.

**Q3: Please add an algorithm to explain how the trajectory prediction is carried out, step by step.**

A3: Following the reviewer's advice, we explain the trajectory prediction step by step.

---

**Input**: observed history trajectory $H$ of entity $v$, time horizon $T$, the number of rules $N$

**while** not converged **do**

$\qquad$ Use the rule generator $p_\theta$ to generate a set of rules $z$ by $p_\theta(z \mid v, H_t) = \Psi(z \mid N, \mathrm{Trans}_\theta(v, H_t))$;

$\qquad$ Update the predictor $p_w$ based on the generated rules $z$;

$\qquad$ Identify $K$ high-quality rules from $z$ according to Eq. 10;

$\qquad$ Update the rule generator $p_\theta$ according to the identified rules;

**end**

Use $p_\theta$ to generate rules and feed them into $p_w$ for prediction;

**for** $i = 1,\dots,T$ **do**

$\qquad$ Calculate the posterior probability $P(z_i \mid \text{CurrentPosition})$;

**end**

Update parameters based on the computed posterior probabilities;

**for** $i = 1,\dots,T$ **do**

$\qquad$ Compute the weighted average of predicted positions from each component;

**end**

---

**Q4: What can be other applications of this framework beyond trajectory prediction? Also, can the "rules" here be related to "policies" to choose actions as in Reinforcement Learning?**

A4: (1) This framework can also be applied to other spatial-temporal events, such as social analysis, mobile robots, and epidemic forecasting. (2) Yes, the generated rules can be utilized to choose actions in Reinforcement Learning. Representing policies in reinforcement learning by first-order logic, based on policy gradient methods and differentiable inductive logic programming, is our future work; this has significant advantages in terms of interpretability and generalisability.
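As a complement to the step-by-step procedure in A3, here is a minimal, self-contained sketch of the rollout phase, with toy stand-ins only (a fixed categorical rule distribution and a fixed displacement vector per rule; all names are illustrative, not the paper's implementation):

```python
import math
import random

random.seed(0)

# Toy stand-ins: N candidate rules, each proposing a fixed 2-D displacement.
N, K, T = 8, 3, 4
rule_effects = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(N)]
rule_logits = [0.0] * N   # rule generator p_theta as a categorical over rules

def rule_probs(logits):
    """Softmax over rule logits."""
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def predict_trajectory(start, logits, horizon):
    """At each step: keep the top-K rules by probability and move by the
    probability-weighted average of their proposed displacements."""
    x, y = start
    traj = []
    for _ in range(horizon):
        p = rule_probs(logits)
        top = sorted(range(len(p)), key=lambda i: p[i], reverse=True)[:K]
        z = sum(p[i] for i in top)   # renormalise the K selected weights
        x += sum(p[i] / z * rule_effects[i][0] for i in top)
        y += sum(p[i] / z * rule_effects[i][1] for i in top)
        traj.append((x, y))
    return traj

traj = predict_trajectory((0.0, 0.0), rule_logits, T)
```

In the actual method the logits would come from the transformer-based generator and the displacements from the learned predictor; this toy version only illustrates the top-K selection and weighted averaging.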
**Q5: Can the rule probability distribution be modeled by a cheaper model than Transformer?**

A5: Yes, the rule probability distribution can be modeled by other backbones, such as a CNN. In the supplementary material Section 7, we compare our method (transformer-based) with three widely used backbones, CNN, RNN, and GNN (graph neural network), and evaluate them on the NBA dataset. The results are shown **in supplementary material Table 3**. Our architecture achieves superior results in all metrics. Below we show the results for the reviewer's convenience.

| Method | 1.0s | 2.0s | 3.0s | 4.0s |
|--|--|--|--|--|
| CNN | 0.41/0.60 | 0.78/1.07 | 0.98/1.53 | 1.24/1.76 |
| GNN | 0.38/0.59 | 0.76/1.03 | 0.96/1.49 | 1.19/1.69 |
| RNN | 0.36/0.49 | 0.69/1.00 | 0.95/1.39 | 1.17/1.67 |
| Ours | 0.30/0.40 | 0.58/0.88 | 0.87/1.31 | 1.13/1.60 |

Table: Comparison of different backbones in the rule generator in the SDD dataset.

---

Rebuttal Comment 1.1:

Comment: I thank the authors for the responses. I am quite satisfied and have no further questions. I will be happy if these are added to the final version of the paper, if accepted.
Summary: This paper presents a novel approach for human trajectory prediction, introducing a learnable rule-based framework that combines rule generation/reasoning and EM optimization. Unlike previous works in the field, this framework utilizes a neural rule generator to generate rules and treats them as latent variables. Additionally, it selects the top k rules from the entire rule set to predict future trajectories. The experimental results on two datasets demonstrate the effectiveness of the proposed framework.

Strengths:

- The paper is well-structured and written in a clear manner, making it easy to follow. The authors have used appropriate notation and included equations where necessary, enhancing the readability of the paper.
- The detailed analysis and visualizations presented in sections 5.4-5.6 and the ones in the supplementary materials contribute to a better understanding of the proposed method. I found them enjoyable and informative to read.
- The paper introduces a novel framework that incorporates separate steps for rule generation and logic reasoning. This approach is original and reasonable for the given task.
- The proposed framework demonstrates improved results on the Stanford Drone Dataset and NBA SportVU dataset.
- The authors have put effort into providing numerous illustrative examples that showcase the behavior of the proposed framework. The explanation of logic rules in the NBA dataset (Figure 5) is particularly interesting.

Weaknesses:

- The predicates are required to be manually defined, and these predicates may need to be redefined when applied to new scenarios, such as transitioning from 2D to 3D scenarios.
- ETH/UCY benchmarks have been widely used for benchmarking short-term trajectory prediction but are missing from this paper.

Technical Quality: 4 excellent

Clarity: 4 excellent

Questions for Authors:

- Regarding the latent rule space, what is the dimension of the latent space z?
- I have reservations about fully accepting the argument presented in L207 that "our method can discover more principles-like complex rules." The generation of more rules for complex conditions necessitates a larger latent space (higher dimension), which can pose challenges in generating high-quality rules.
- In L207-L208, it is mentioned that "For each query, we aim to identify top K rules $z_I$ from all generated rules $\hat{z}$," with K set to 5 in the experiments. Not all movements require up to K rules. Some movements may be simple and only require 1-2 rules (as seen in the second example in Figure 2). Is it possible to automatically determine the number of rules based on their weights?

Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.

Soundness: 4 excellent

Presentation: 4 excellent

Contribution: 3 good

Limitations: The authors have provided a discussion of the limitations in Section 6.

Flag For Ethics Review: ['No ethics review needed.']

Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.

Code Of Conduct: Yes
Rebuttal 1:

Rebuttal: We sincerely thank you for your recognition of our work and your insightful reviews! We hope our responses can address your questions. Our responses are listed below.

**Q1: The predicates are required to be manually defined, and these predicates may need to be redefined when applied to new scenarios, such as transitioning from 2D to 3D scenarios.**

A1: Thanks for your advice. To demonstrate the benefit of our method in 3D scenarios, we also evaluate it on the 3D motion prediction Haggling dataset [1] over 7 different action types; the results are shown in the table below. Our method achieves promising results for both short-term and long-term predictions of complex activities.

| Time | 80ms | 160ms | 320ms | 400ms | 560ms | 640ms | 720ms | 1000ms |
|--|--|--|--|--|--|--|--|--|
| Walking | 0.38 | 0.58 | 0.80 | 0.89 | 0.98 | 1.03 | 1.11 | 1.22 |
| Eating | 0.25 | 0.39 | 0.60 | 0.76 | 0.94 | 1.01 | 1.03 | 1.29 |
| Discussion | 0.31 | 0.57 | 0.88 | 0.99 | 1.48 | 1.65 | 1.81 | 1.96 |
| Phoning | 0.55 | 0.83 | 1.22 | 1.35 | 1.58 | 1.65 | 1.72 | 1.92 |
| Posing | 0.27 | 0.56 | 1.19 | 1.48 | 1.93 | 2.14 | 2.29 | 2.58 |
| Sitting | 0.40 | 0.63 | 1.02 | 1.18 | 1.28 | 1.34 | 1.40 | 2.02 |
| Waiting | 0.36 | 0.69 | 1.25 | 1.46 | 1.80 | 1.95 | 2.12 | 2.57 |

Table: Performance evaluation (in MAE) of comparison methods on the Haggling (H3.6) dataset.

[1] Liu Z, Wu S, Jin S, et al. Towards natural and accurate future motion prediction of humans and animals[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019: 10004-10012.

**Q2: ETH/UCY benchmarks have been widely used for benchmarking short-term trajectory prediction but are missing from this paper.**

A2: We have evaluated our method on the ETH/UCY dataset, and **due to the page limitation, we put the results in the supplementary material Table 6**. Below we show the results for the reviewer's convenience. These new experimental results show that we still achieve superior results in most metrics.
| Methods | ETH | HOTEL | UNIV | ZARA1 | ZARA2 | AVG |
|--|--|--|--|--|--|--|
| Y-Net | 0.28/0.33 | 0.10/0.14 | 0.24/0.41 | 0.17/0.27 | 0.13/0.22 | 0.18/0.27 |
| MID | 0.39/0.66 | 0.13/0.22 | 0.22/0.45 | 0.17/0.30 | 0.13/0.27 | 0.21/0.38 |
| NSP-SFM | 0.25/0.44 | 0.09/0.13 | 0.21/0.38 | 0.16/0.27 | 0.12/0.20 | 0.17/0.24 |
| Social SSL | 0.69/1.37 | 0.24/0.44 | 0.51/0.93 | 0.42/0.84 | 0.34/0.67 | 0.44/0.85 |
| Social Implicit | 0.66/1.44 | 0.20/0.36 | 0.32/0.60 | 0.25/0.50 | 0.22/0.43 | 0.33/0.37 |
| Social-VAE | 0.41/0.58 | 0.13/0.19 | 0.21/0.36 | 0.17/0.29 | 0.13/0.22 | 0.21/0.33 |
| ABC+ | 0.31/0.44 | 0.16/0.21 | 0.25/0.47 | 0.21/0.28 | 0.20/0.26 | 0.23/0.32 |
| Ours | **0.22/0.30** | **0.07/0.13** | **0.16/0.34** | **0.14/0.25** | **0.07/0.16** | **0.13/0.24** |

Table: Quantitative results (ADE20/FDE20) of trajectory prediction on the ETH/UCY dataset. Bold font represents the best result.

**Q3: Regarding the latent rule space, what is the dimension of the latent space z?**

A3: The dimension of the latent embedding in our method is set to 32 on all datasets. We will add this in a revision.

**Q4: I have reservations about fully accepting the argument presented in L207.**

A4: Thanks for your advice. Complex rule generation indeed requires a higher-dimensional latent space, which is part of our future work. We will modify this sentence to "our method has the potential to discover more principles-like complex rules".

**Q5: Is it possible to automatically determine the number of rules based on their weights?**

A5: Yes, the number of rules can also be learned automatically by neural networks or other deep learning methods. In our setting, we manually set K as an upper limit to reduce the computation cost. In the E-step, some movements are so simple that they require fewer rules, because most of the candidate rules' weights are minimal. Moreover, following the reviewer's advice, we set a weight threshold to automatically determine the number of rules; the results are shown in the following table. This operation brings slight improvements.
| Times | Ours | Ours++ |
|--|--|--|
| 1.0s | 0.30/0.40 | 0.29/0.38 |
| 2.0s | 0.58/0.88 | 0.57/0.86 |
| 3.0s | 0.87/1.31 | 0.85/1.29 |
| 4.0s | 1.13/1.60 | 1.11/1.55 |

Table: Quantitative results of trajectory prediction on the NBA dataset. "Ours++" means that we automatically determine the number of rules.

---

Rebuttal Comment 1.1:

Comment: Thank you for your response. The additional details you provided have effectively addressed my concerns. Their inclusion will undoubtedly enhance the quality of the paper. I'm therefore pleased to recommend acceptance.
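The weight-threshold rule selection described in A5 (the "Ours++" variant) might look like the following sketch; the function name and threshold value are illustrative assumptions, not the authors' code:

```python
# Hypothetical sketch of thresholded rule selection: keep rules whose
# learned weight exceeds a threshold, capped at K, so simple movements
# automatically use fewer rules than the upper limit K.
def select_rules(weights, threshold=0.1, k_max=5):
    """Return indices of selected rules, highest weight first."""
    ranked = sorted(range(len(weights)), key=lambda i: weights[i], reverse=True)
    chosen = [i for i in ranked if weights[i] > threshold]
    return chosen[:k_max] if chosen else ranked[:1]  # always keep at least 1 rule

# A simple movement may activate only 1-2 rules even with k_max=5:
print(select_rules([0.8, 0.05, 0.02, 0.3, 0.01]))  # → [0, 3]
```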
Summary: The paper proposes a method for learning spatio-temporal logic rules to explain human actions, utilizing the EM framework. The method's results are more easily interpretable by humans, thanks to the logic rules. The method beats some state-of-the-art methods on two real-world motion prediction datasets.

Strengths: The method is well-motivated, i.e., that human actions are driven by intention as well as social and environmental factors. The explainability of the model is impressive and well-visualized.

Weaknesses: My major concern with this work is the need for dataset-dependent actions. Biasing the model with additional labels, which are not provided to other methods, allows it to spend more capacity on forecasting, effectively producing an unfair advantage. It would have been better if the authors provided an approach that can automatically extract the actions, without human pre-selection.

While the general intention is well-motivated, the method overview and method motivation in the introduction section are not well described, making it difficult to get a first good understanding of what the method tries to achieve and how this is done. For example, the authors explain that "[] logic rules present a compact knowledge representation" (L19) but do not further explain what "logic" they are using and how it is defined.

In the method section a lot of important details are missing, making it difficult to follow:

* In the equations in L104.5 and L113.5, the authors utilize variables v and v' without introducing what these variables mean. The authors should explain the variables appropriately.
* In L122 the authors introduce various new object types "person", "block" and "key" which are not explained.
* In L117 the paper says that the binary predicate values can be softened into "probabilistic" ones using kernel functions, without details or reference to other parts of the paper.
This makes it particularly difficult to understand, as the paper has not yet revealed how the logic rules are applied to obtain the predictions.

Spatio-temporal predicates (L104) contain the time and space dimensions. This necessitates that both time and area must be discretized. The discretization rate would be an important hyper-parameter and should be ablated accordingly.

Why do the authors not follow the experimental setup in [23] and also evaluate on the ETH-UCY dataset? Can the dataset not be labeled with actions? If so: why? What makes the dataset difficult for this method setup?

The organization of the experiments section is not convincing: while the authors conducted a lot of experiments, they should have included them in the main paper.

**Minor issues**: The authors claim their model's "superior interpretability" (L15) but do not properly explain it in the text (it is only attempted in Figure 5). This should be explained briefly. Equations should be numbered to make referencing them easier.

Technical Quality: 2 fair

Clarity: 3 good

Questions for Authors:

It seems that using the full 3D human pose would be a much better indicator of human actions, interactions with objects, and interactions with other persons. Why did the authors use only trajectory data? For example, the Haggling dataset [a] contains triadic interactions under a well-defined social protocol, which could have been utilized.

L94: "Consider a set of objects denoted as C": does object mean humans? This should be explicitly stated if true.

L96: could the authors elaborate on how k is encoded?

[a] Liu, Zhenguang, et al. "Towards natural and accurate future motion prediction of humans and animals." CVPR 2019.

Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair

Presentation: 3 good

Contribution: 3 good

Limitations: The method's challenges are discussed to some degree.

Flag For Ethics Review: ['No ethics review needed.']

Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.

Code Of Conduct: Yes
Rebuttal 1:

Rebuttal: We sincerely thank you for your insightful reviews! We hope our response below addresses your concerns.

**Q1: My major concern with this work is the need for dataset-dependent actions.**

A1: Thanks for your suggestion. Learning specific actions would require learning recursion and predicate invention [a]. Invented predicates can be interpreted as a set of phrases expressing the meaning of actions. Following [a], we embed meta-interpretive learning (MIL), which supports efficient predicate invention and the learning of recursive logic programs built as a set of metalogical substitutions by a modified Prolog meta-interpreter, into our framework and evaluate its performance on the NBA dataset. This operation brings slight improvements.

| Times | Ours | Ours+ |
|--|--|--|
| 1.0s | 0.30/0.40 | 0.28/0.37 |
| 2.0s | 0.58/0.88 | 0.58/0.87 |
| 3.0s | 0.87/1.31 | 0.85/1.28 |
| 4.0s | 1.13/1.60 | 1.11/1.54 |

Table: Quantitative results (ADE/FDE) of trajectory prediction on the NBA dataset. "Ours+" means that we automatically extract the actions.

[a] Muggleton S H. Meta-interpretive learning of higher-order dyadic datalog: Predicate invention revisited[J]. Machine Learning, 2015.

**Q2: They do not further explain what "logic" they are using and how it is defined.**

A2: Indeed, "logic" emphasizes high-level reasoning and encourages structuring the world in terms of objects, properties, and relations [b]. Logic provides the formal machinery to reason about concepts such as time, space, abstraction, and causality in a rigorous way.

[b] Belle V. Symbolic logic meets machine learning: A brief survey in infinite domains[C]//International Conference on Scalable Uncertainty Management. 2020.

**Q3: In the method section a lot of details are missing ...**

A3: (1) v and v' represent entity-time-location triplets (described in Line 125), so X(v) and R(v,v') are logic random variables that define the properties or relations of entities.
(2) "person", "block" and "key" are specific instances in the object set C. The rule in Line 121 represents that a person wants to pick up a key while a block is in front of him and the key is behind him, so he turns around. We use this example to illustrate the meaning of logic rules.

(3) Following [c], we use the softened representation and a weighted combination of logic rules to deal with uncertainty in events. This soft constraint has been utilized in other papers (Refs [10] and [24] in the main paper).

[c] Li S, et al. Temporal logic point processes[C]//ICML, 2020.

**Q4: The discretization rate would be an important hyper-parameter and should be ablated accordingly.**

A4: The discretization rate directly influences the computation cost and the loss of information: a larger discretization rate means more information loss but lower computation cost. In fact, our method handles different discretization rates properly. In the main paper, we set the smallest discretization rate in our experiments. Following the reviewer's advice, we also evaluate different discretization rates (denoted d; "d=0.1" means that we regularly remove 10% of the trajectory data) on the SDD dataset, with results in the following table.

| Metric | d=0.1 | d=0.2 | d=0.3 | d=0.4 |
|--|--|--|--|--|
| ADE | 8.24 | 12.21 | 15.03 | 19.49 |
| FDE | 16.74 | 19.58 | 24.98 | 30.87 |

Table: Ablation study of the discretization rate on the SDD dataset.

**Q5: Why do the authors not follow the experimental setup in [23] and also evaluate on the ETH-UCY dataset?**

A5: In Line 235, we state that "We follow the [23] standard train-test split, and predict the future 4.8s". Moreover, we have evaluated our method on the ETH-UCY dataset, and the results are shown **in the supplementary material Table 6**. This dataset can be labeled with actions, and our method obtains strong performance on it.
**Q6: The organization of the experiments section is not convincing.**

A6: Due to the page limitation, most of the experimental results were placed in the supplementary material, including ablation studies of backbones, additional results on the ETH/UCY dataset, etc. We will add more results to the main paper.

**Q7: The authors claim their models have "superior interpretability" (L15) ...**

A7: In Figure 5, we have added explanations of the logic rules from the NBA dataset to show the interpretability of our method. Each rule vividly manifests the corresponding actions of each player. Additionally, we show more generated rules **in the attached PDF Table 1**. These rules also conform to common sense. All equations will be numbered in the final version.

**Q8: The Haggling dataset [a] contains triadic interactions which could have been utilized.**

A8: We choose trajectory data because it is easy to define spatial-temporal relation predicates and object properties to explore intrinsic logic rules. Moreover, following the reviewer's advice, we also evaluate our method on the Haggling dataset; the results are shown in the following table. Our method achieves promising results for both short-term and long-term predictions of complex activities. **More results are shown in the attached PDF Table 3 and Figure 1.**

| Time | 80ms | 160ms | 320ms | 400ms | 560ms | 640ms | 720ms | 1000ms |
|--|--|--|--|--|--|--|--|--|
| Walking | 0.38 | 0.58 | 0.80 | 0.89 | 0.98 | 1.03 | 1.11 | 1.22 |
| Eating | 0.25 | 0.39 | 0.60 | 0.76 | 0.94 | 1.01 | 1.03 | 1.29 |

Table: Performance evaluation (in MAE) of comparison methods on the Haggling (H3.6) dataset.

**Q9: Does object mean humans?**

A9: Generally, the object set C represents all objects in the specific environment. In our setting, all entities are human, so "object" means human.

**Q10: Could the authors elaborate on how k is encoded?**

A10: Note that we have defined several event types k in Line 238 and Line 243.
These actions can be encoded from the history of each entity as Boolean logic variables.

---

Rebuttal Comment 1.1:

Comment: Thank you for your response. The rebuttal as well as the other reviewers have addressed most of my concerns, and I will change my recommendation to "accept".

---

Reply to Comment 1.1.1:

Title: Thank you!

Comment: Dear Reviewer zUPP:

We would like to express our gratitude for your acknowledgment of our work! Would you mind updating the score accordingly in the OpenReview system? We will incorporate the new results and comparisons in the revised version!
Summary: This study presents a logic-informed, knowledge-driven modeling framework designed to predict and understand human movements, based on the analysis of their trajectories. It takes into account that human behaviors are commonly guided by intentions, desires, and spatial relationships with surrounding objects. The research integrates a set of spatial-temporal logic rules, derived from observational data, into the model, utilizing an expectation-maximization (EM) algorithm to infer model parameters and rule content. The performance of the model is evaluated using datasets of NBA basketball player movements and pedestrian trajectories, demonstrating high levels of interpretability, predictive accuracy, and efficacy in its results.

Strengths:

- The objectives and motivation are well-articulated, offering a clear understanding of the reasons behind the research and the problem it aims to address.
- The performance of the proposed model is outstanding when compared to other state-of-the-art methods, signifying an advantage of the work.
- The proposed method is based on theoretical principles (the EM method). It is not a product of arbitrary choices, but rather a thoughtful construction based on the EM algorithm, which adds credibility to the model's outcomes.

Weaknesses:

- If additional specifics regarding the architecture could be provided, such as the input and output formats utilized by the transformer, it would be incredibly beneficial for a deeper understanding of the method.
- The evaluation of this work is currently based on only two datasets. If the authors could expand this to include results from more universally employed pedestrian trajectory prediction datasets (ETH [i] or UCY [ii]), it would significantly strengthen the claim of the model's generalizability.

[i] Improving data association by joint modeling of pedestrian trajectories and groupings. ECCV 2010.

[ii] Crowds by example. Computer Graphics Forum.
Technical Quality: 3 good Clarity: 3 good Questions for Authors: - There is a typo in equation (3); the correct symbol should be $\varphi_f$ instead of $\phi_f$. - The references for the datasets utilized in the experiments are currently absent and need to be included. - Where did you get the heatmaps depicted in Fig. 3(b) and (d)? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: If my understanding is correct, the spatial-temporal logic rule space, encompassing both static spatial and dynamic spatial relations, needs to be explicitly defined. This implies that prior knowledge or assumptions about the environment and the interactions within it are required to structure and define these rules. Thus, it is not a process automatically derived from the data but requires human intervention and understanding to set up appropriately. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank you for your recognition of our work and your insightful reviews! We hope our responses can address your questions. Our responses are listed below. **Q1: If additional specifics regarding the architecture could be provided, such as the input and output formats utilized by the transformer, it would be incredibly beneficial for a deeper understanding of the method.** A1: Thanks for your suggestion. The input of our framework is the historical trajectory coordinates of entities, and the output is the predicted future trajectory. The input coordinates are first encoded into a vector by a three-layer MLP, with a ReLU nonlinearity following each of the first two layers, before being fed into the Transformer. The dimensions of keys, values, and queries are all set to 256, and the hidden dimension of the feed-forward layers is 512. The number of heads for multi-head attention is 8. We will add these details in the revision. **Q2: The evaluation of this work is currently based on only two datasets. If the authors could expand this to include results from more universally employed pedestrian trajectory prediction datasets (ETH [i] or UCY [ii]), it would significantly strengthen the claim of the model's generalizability.** A2: We have evaluated our method and other state-of-the-art methods on the ETH/UCY dataset, and **due to the page limitation** in the main paper, the results are shown in **the supplementary material Table 6**. Below we show the results for the reviewer's convenience. These new experimental results show that we still achieve superior results in most metrics.
| Methods | ETH | HOTEL | UNIV | ZARA1 | ZARA2 | AVG |
|--|--|--|--|--|--|--|
| Y-Net | 0.28/0.33 | 0.10/0.14 | 0.24/0.41 | 0.17/0.27 | 0.13/0.22 | 0.18/0.27 |
| MID | 0.39/0.66 | 0.13/0.22 | 0.22/0.45 | 0.17/0.30 | 0.13/0.27 | 0.21/0.38 |
| NSP-SFM | 0.25/0.44 | 0.09/0.13 | 0.21/0.38 | 0.16/0.27 | 0.12/0.20 | 0.17/0.24 |
| Social SSL | 0.69/1.37 | 0.24/0.44 | 0.51/0.93 | 0.42/0.84 | 0.34/0.67 | 0.44/0.85 |
| Social Implicit | 0.66/1.44 | 0.20/0.36 | 0.32/0.60 | 0.25/0.50 | 0.22/0.43 | 0.33/0.37 |
| Social-VAE | 0.41/0.58 | 0.13/0.19 | 0.21/0.36 | 0.17/0.29 | 0.13/0.22 | 0.21/0.33 |
| ABC+ | 0.31/0.44 | 0.16/0.21 | 0.25/0.47 | 0.21/0.28 | 0.20/0.26 | 0.23/0.32 |
| Ours | **0.22/0.30** | **0.07/0.13** | **0.16/0.34** | **0.14/0.25** | **0.07/0.16** | **0.13/0.24** |

Table: Quantitative results (ADE20/FDE20) of trajectory prediction on the ETH/UCY datasets.

**Q3: There is a typo in equation (3); the references for the datasets utilized in the experiments are currently absent and need to be included.** A3: The typo has been corrected and the references (SDD dataset [1] and ETH [2]/UCY [3] datasets) have been added in the revision. [1] Robicquet A, Sadeghian A, Alahi A, et al. Learning social etiquette: Human trajectory understanding in crowded scenes[C]//Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part VIII 14. Springer International Publishing, 2016: 549-565. [2] Pellegrini S, Ess A, Schindler K, et al. You'll never walk alone: Modeling social behavior for multi-target tracking[C]//2009 IEEE 12th international conference on computer vision. IEEE, 2009: 261-268. [3] Lerner A, Chrysanthou Y, Lischinski D. Crowds by example[C]//Computer graphics forum. Oxford, UK: Blackwell Publishing Ltd, 2007, 26(3): 655-664. **Q4: Where did you get the heatmaps depicted in Fig. 3(b) and (d)?** A4: Following [4][5], we utilize Fig. 3 to show the heatmaps of goals and waypoints for t=30 seconds in the future from the estimated distribution. These heatmaps are produced from the distribution map estimated by the decoder in our framework.
[4] Mangalam K, An Y, Girase H, et al. From goals, waypoints & paths to long term human trajectory forecasting[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021: 15233-15242. [5] Jacobs H O, Hughes O K, Johnson-Roberson M, et al. Real-time certified probabilistic pedestrian forecasting[J]. IEEE Robotics and Automation Letters, 2017, 2(4): 2064-2071. --- Rebuttal Comment 1.1: Comment: Thank you for the comprehensive rebuttal. It effectively addresses my concerns. After considering the author's response and comments from other reviewers, I will maintain my initial rating.
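For concreteness, the input pipeline described in A1 above could be sketched as follows. This is a minimal numpy sketch, not the authors' implementation: the 2-D coordinate input, the 128-unit hidden width, and the random initialization are illustrative assumptions; only the three-layer MLP with ReLU after the first two layers and the 256-d token dimension are stated in the rebuttal.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

class TrajectoryEncoder:
    """Three-layer MLP mapping coordinates to 256-d Transformer tokens.

    Per A1: a ReLU follows each of the first two layers, and keys/values/
    queries are 256-d. The 2-D input and 128-d hidden width are assumptions.
    """

    def __init__(self, d_in=2, d_hidden=128, d_model=256, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.standard_normal((d_in, d_hidden)) * 0.1
        self.W2 = rng.standard_normal((d_hidden, d_hidden)) * 0.1
        self.W3 = rng.standard_normal((d_hidden, d_model)) * 0.1

    def __call__(self, coords):
        h = relu(coords @ self.W1)  # layer 1 + ReLU
        h = relu(h @ self.W2)       # layer 2 + ReLU
        return h @ self.W3          # layer 3, no nonlinearity

encoder = TrajectoryEncoder()
tokens = encoder(np.zeros((8, 2)))  # 8 historical time steps of (x, y)
print(tokens.shape)                 # (8, 256): one token per time step
```

Each 256-d token then enters the Transformer (8 attention heads and a 512-d feed-forward hidden layer, per A1); 256 divides evenly by 8, giving 32 dimensions per head.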
Rebuttal 1: Rebuttal: Dear Reviewers, Area Chairs, and Program Chairs, We are greatly thankful for the insightful comments and suggestions, which are very helpful for us to further improve this work. We are very excited that the reviewers provided positive feedback and found our work "well-articulated, offering a clear understanding....the performance of the proposed model is outstanding" (Reviewer gEM9), "the explainability of the model is impressive and well-visualized" (Reviewer zUPP), "introduces a novel framework that incorporates separate steps for rule generation and logic reasoning" (Reviewer 7WeJ), and "the rule definition and rule inference framework is quite general and can find wider applications than the experiments" (Reviewer AEbP). In our response, we provide additional figures and tables in the attached PDF for visualization. Recognizing the importance of additional explanations and experiments, we are committed to making these enhancements. To ensure transparency and clear communication, we've summarized our main responses as follows:
- **Experiment on ETH/UCY dataset.** We would like to address the fact that some reviewers may not have had the opportunity to review our supplementary material, leading to concerns regarding the ETH/UCY dataset. It's worth noting that we have indeed evaluated our method alongside other state-of-the-art techniques using the ETH/UCY dataset. However, owing to page constraints in the main paper, we've presented the results in *Table 6 of the supplementary material*.
- **Experiment on 3D scenarios.** In response to the reviewer's suggestion, we have extended our evaluation to the 3D Haggling dataset, encompassing diverse action types. The outcomes are detailed in *Table 3* and *Figure 1* in the attached PDF. Encouragingly, our method demonstrates noteworthy performance for both short-term and long-term predictions, particularly in intricate activity scenarios.
- **Experiment on synthetic datasets.** We use the approach outlined in [1] to test our model's ability to uncover rules on synthetic datasets with known ground-truth rules and weights. We compare the learned rule weights with the actual ones and report the Mean Absolute Error (MAE) as a measure of accuracy.
- **Ablation study.** Based on the feedback from reviewers, we have included more ablation studies in our work. These studies cover aspects such as rule selection, discretization rates, and variations in different backbones.
- **Clarification about some important concepts.** We have revised our explanation of "logic" and "interpretability" based on the comments provided by the reviewers.

Beyond the academic contributions presented in the paper, our approach also offers practical significance. We introduce a feasible and differentiable algorithm capable of simultaneous learning of rule content and model parameters from observational data. Our model directly utilizes precise, detailed, and irregularly-spaced action times and original 3D coordinates as inputs. Through extensive experimentation on real datasets, we have showcased our model's strong performance in human action prediction and explanation. We hold the view that this distinct approach has the potential to inspire a host of future research endeavors. [1] Li S, Feng M, Wang L, et al. Explaining point processes by learning interpretable temporal logic rules[C]//International Conference on Learning Representations. 2021. Pdf: /pdf/4f004cf281a3f7bb4cf1af0656f7218dea505391.pdf
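The synthetic-data check described above reduces to a mean-absolute-error comparison between learned and ground-truth rule weights. A tiny self-contained sketch; the weight values are invented purely for illustration:

```python
import numpy as np

# Hypothetical ground-truth and learned rule weights (illustrative values only).
true_weights = np.array([1.0, 0.5, 2.0, 0.0])
learned_weights = np.array([0.9, 0.6, 1.8, 0.1])

# Mean Absolute Error between learned and ground-truth weights.
mae = float(np.mean(np.abs(learned_weights - true_weights)))
print(mae)  # 0.125
```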
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Diffusion Model-Augmented Behavioral Cloning
Reject
Summary: The paper proposes a new algorithm for behavior cloning (BC) where the BC learning objective is modified with a diffusion modeling loss that models the joint state-action distribution of the expert data. The paper demonstrates the benefits of modeling both the conditional probability and joint probability of the expert distribution. The results of the proposed method have been compared with baselines modeling both conditional probability and joint probability distributions and have been supported with appropriate ablation studies. Strengths: - The paper is well-written and easy to understand. - The problem has been well motivated. The authors claim that while methods that solely model the conditional probability p(a|s) of the expert data struggle to generalize to unseen states during training, solely modeling the joint probability can improve generalization to unseen states at the cost of high inference time. Further, though generative models provide encouraging results in modeling stochastic and multimodal behaviors, they struggle with manifold overfitting (shown in Fig. 4(c)). Accordingly, the paper proposes a loss function that combines modeling the conditional and joint probabilities of the expert data while using a state-of-the-art generative model (a diffusion model) to benefit from its superior behavior modeling capabilities (as shown in [1], which has also been cited in the paper). - An interesting feature of the proposed method is that since it only uses a diffusion model to guide the learning of the conditional probability, it is able to circumvent the issue of high inference time associated with the noising and denoising procedure in diffusion models (mentioned in Sec. 5.3). - The primary results have been compared with baselines modeling both conditional probability and joint probability distributions.
Table 1 shows that DBC outperforms BC (conditional probability), Implicit BC (joint probability), and Diffusion Policy (a state-of-the-art generative model) on all tasks except Walker, where DBC nonetheless performs comparably to the best-performing baseline (BC); the authors provide some justification for this in Sec. 5.3 (Locomotion). - The paper provides interesting insights on modeling high-dimensional action spaces, inference efficiency, generalization to unseen goals, and manifold overfitting. Additional ablation studies to justify design choices such as the choice of the generative model, the loss coefficient value, and the effect of the normalization term have also been provided. [1] Chi, Cheng, et al. "Diffusion policy: Visuomotor policy learning via action diffusion." arXiv preprint arXiv:2303.04137 (2023). Weaknesses: - The limitations of the proposed approach can only be found in the appendix. Though I believe that is fine given space limitations, it would be nice to mention them briefly in the main paper for completeness. - Since the paper focuses on imitation learning from expert data, it would be interesting to also add *Action Chunking with Transformers* (ACT) [1] as a baseline. ACT also uses a generative model (a conditional variational autoencoder) as a policy, and given the reported results, it would be interesting to see how DBC compares with ACT. I understand that the paper does compare with a VAE. However, ACT with its transformer encoder and decoder would be an interesting baseline to compare with. - I am curious about how the performance of the proposed method scales with the dataset size. Specifically, since diffusion models seem to be “data hungry” (based on results from the computer vision and NLP community), it would be interesting to see how this method scales with dataset size.
Since this method solely uses a diffusion model to guide the learning of policy, I believe a study of differences in the performance of DBC and the baselines with different amounts of training data might give interesting insights. [1] Zhao, Tony Z., et al. "Learning Fine-Grained Bimanual Manipulation with Low-Cost Hardware." arXiv preprint arXiv:2304.13705 (2023). Technical Quality: 3 good Clarity: 3 good Questions for Authors: It would be great if the authors could address the points mentioned in the “Weaknesses” section. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes. However, the limitations and societal impact of the work can only be found in the appendix. It would be nice to either have a brief mention or a reference to the appendix section in the main paper for completeness. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the thorough and constructive comments. Please find the response to your questions below. **Q**: The limitations of the proposed approach can only be found in the limitations. Though I believe that is fine given space limitations, it would be nice to mention it briefly in the main paper for completeness. **A**: We thank the reviewer for the suggestion. We will revise the paper and incorporate it. **Q**: Comparison to Action Chunking with Transformers (ACT) [1] **A**: We thank the reviewer for pointing out this interesting paper. We will revise our paper and discuss this work. We believe ACT mainly focuses on fine-grained manipulation tasks with image-based input, which differs from our setup learning from vectorized states and actions. Moreover, while our proposed method and the baselines in this work consider learning from single time-step state-action pairs, ACT aims to model long-horizon sequences with Transformers, and therefore its contributions are orthogonal to our work. It would be interesting to combine our work and ACT in the future. We would like to also note that ACT was published in July 2023 at RSS, which makes it impossible for us to include it by the time we submitted our work (May 2023). **Q**: Varying dataset sizes **A**: We thank the reviewer for the insights. We agree that studying how the performance of our method and the baselines varies given different amounts of expert data would be informative. To answer this question, we conducted experiments in the Maze environment using 0.25, 0.5, and 0.75 fractions of the original dataset size. For each method, we used the same set of hyperparameters as reported in the main paper. The results are shown below. 
| Method | 0.25 | 0.5 | 0.75 | 1.0 |
|---|---|---|---|---|
| BC | 66.59 (3.80) | 68.06 (5.47) | 73.81 (6.15) | 79.35 (5.05) |
| Implicit BC | 58.35 (7.71) | 63.12 (4.32) | 73.78 (4.66) | 81.43 (4.88) |
| Diffusion Policy | 61.48 (3.40) | 67.75 (5.77) | 71.05 (4.31) | 73.34 (5.30) |
| DBC (Ours) | **67.53** (4.42) | **73.83** (4.87) | **82.42** (2.90) | **86.99** (2.84) |

Table: Success rate (%) in the Maze environment, with standard deviations in parentheses, for each fraction of the original dataset size.

The experimental results show that our proposed method DBC outperforms the baselines across different dataset sizes. In particular, at 0.5x and 0.75x of the original dataset size, the diffusion model maintains satisfactory performance, leading to superior performance of DBC. When reducing the dataset size to 0.25x, we observed a slightly deteriorated performance of the learned diffusion model, which resulted in a smaller performance gap between DBC and BC. **References** [1] Zhao et al. "Learning Fine-Grained Bimanual Manipulation with Low-Cost Hardware" RSS 2023 --- Rebuttal Comment 1.1: Title: Thank you for the rebuttal Comment: I thank the authors for the rebuttal. All of my concerns have been addressed.
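The combined objective discussed in this thread, a BC term plus a diffusion-model term that penalizes only the excess over the expert's own loss (the Eq. 5 form summarized in the following review), could be sketched as follows. This is a toy sketch under stated assumptions: the quadratic stand-in for the diffusion loss, the coefficient value, and all names are illustrative, not the paper's implementation.

```python
import numpy as np

def dbc_loss(s, expert_a, policy_a, dm_loss, lam=0.5):
    """Sketch of the combined objective: a BC term on the conditional
    probability plus a diffusion-model term on the joint probability.
    Following the expert-baseline form (Eq. 5), the DM term only
    penalizes the policy's loss in excess of the expert's."""
    l_bc = float(np.sum((policy_a - expert_a) ** 2))
    l_dm = max(dm_loss(s, policy_a) - dm_loss(s, expert_a), 0.0)
    return l_bc + lam * l_dm

# Toy stand-in for the pre-trained, frozen diffusion-model loss:
# squared distance of the (s, a) pair from a fixed mode. The real
# objective is the DDPM denoising loss; this quadratic is illustrative.
mode = np.array([0.0, 0.0, 1.0])
dm_loss = lambda s, a: float(np.sum((np.concatenate([s, a]) - mode) ** 2))

s = np.array([0.0, 0.0])
expert_a, policy_a = np.array([1.0]), np.array([0.5])
total = dbc_loss(s, expert_a, policy_a, dm_loss, lam=0.5)
print(total)  # 0.375 = 0.25 (BC) + 0.5 * 0.25 (DM excess over expert)
```

Note that when the policy reproduces the expert action exactly, both terms vanish, and a mis-calibrated diffusion model cannot penalize the policy below the expert's own loss.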
Summary: The submission proposes an imitation learning method optimizing a loss that is a weighted sum of a behavioral cloning loss and a loss based on a diffusion model. The diffusion model loss penalizes the policy for generating actions that are unlikely under the diffusion model, which is pre-trained to maximize the likelihood of joint state-action pairs sampled from the data. Strengths: Originality: I am not aware of any previous work combining a behavioral cloning loss with a diffusion model loss. Quality: I think there is one interesting "key trick" that seems to be required for the method to work, which is using the diffusion-model-loss incurred by the expert demonstration as a baseline. I can see how if the diffusion model is wrong somewhere, it could inappropriately penalize the policy for doing something reasonable. Only penalizing the loss in excess of the expert's loss mitigates the effect of errors in the diffusion model. The paper's strongest point is probably the experimental results in section 5.3. A reasonably diverse set of tasks were evaluated, and the proposed method performs significantly better than the baselines in most cases. I also appreciated that the experiments were targeted towards testing specific hypotheses—e.g., the results in section 5.5, which validate the hypotheses that the BC loss is necessary, and that the diffusion model is the best choice of EBM for this purpose. Significance: The relative simplicity of the approach, along with positive experimental results in non-trivial tasks, could encourage more research into combining BC with EBM-type losses. Weaknesses: Originality: Although the specific combination of BC and DDPMs is novel, there is some precedent for combining BC losses with other losses—for example [A], which combines a BC loss with the GAIL loss. [A] Jena et al. Augmenting GAIL with BC for sample efficient imitation learning. 
https://arxiv.org/pdf/2001.07798.pdf Clarity: Some key aspects of the submission's presentation could be improved. In particular, the mathematical notation is lacking in key places. Section 4.2.2 is missing notation to describe where we are taking various expectations, and with respect to which distributions. For example, should eq. 5 read $E_{(s,a) \sim D, a \sim \pi(s)} \max(..., 0)$ or $\max (E_{(s,a) \sim D, a \sim \pi(s)} \dots, 0)$? Are any expectations taken with respect to $s$ sampled from the DM? I believe the answer is no, but I had to check the algorithm description in the appendix to be sure. The answers to these questions should be obvious from the math, but this is unfortunately not the case. Another point of confusion is in section 4.3, which mentions "jointly optimizing the proposed diffusion model loss." I generally interpret this to mean simultaneously optimizing the parameters of all the models, but this is not the case. I again had to check the algorithm description in the appendix to verify that the method is strictly a two-phase method, where the DM is pretrained and fixed before optimizing eq. 6. Quality: One of the paper's main weaknesses is confusion as to the motivation for the method and why the method seems to work. Some of the explanations offered by the paper include: 1. Diffusion models / EBMs generalize better because they model the joint probability p(s,a) as opposed to the conditional probability p(a|s) (lines 52-54). 2. Combining BC with an EBM helps because EBMs alone suffer from "manifold overfitting." (lines 127-128) Both of these explanations seem a bit tenuous. As for the first statement (EBMs generalize better because they model the joint probability), it is simultaneously vague, debatable, and it is unclear how that property would benefit the method, even if it were true. 
It is stated (line 113) that "These methods demonstrate superior generalization performance in diverse domains," but this statement seems too vague to be useful. Even assuming that modeling the joint probability produces better generalization in some sense, how does that translate to better results in the proposed method? No plausible mechanism is given for this. For example, I could imagine that if one were to sample joint state-action configurations from a good model, and then train behavioral cloning using those state-action pairs, then that could conceivably perform better than training BC on raw data, because sampling from generated states may serve as a form of data augmentation. However, this is not what the method does, according to the algorithm description in the appendix—states are sampled strictly from the data distribution. As for the statement about manifold overfitting, it is true that this is theoretically an issue with EBMs—however, there is a trivial solution for this that works fairly well, which is to perturb the training examples with a small amount of random noise to break any manifold constraints in the data. So, it is not necessary to add a BC loss to EBMs in order to solve the manifold overfitting problem. That said, I do believe it is plausible that combining the BC and DM losses helps, but not for any of the reasons above. A more plausible explanation for why the method works is that the loss optimizes (ignoring eq. 4) an approximation to "forward" KL divergence plus "reverse" KL divergence—since the DM loss is a lower bound on the log-likelihood of the "data," where the "data" in this case consists of samples from the model. The forward KL divergence encourages mode coverage, at the expense of occasionally producing bad samples, whereas the reverse KL has the opposite properties—it produces "good" samples, at the expense of losing modes. Therefore, these losses have complementary characteristics that benefit each other when combined. 
However, one has to be careful not to inadvertently penalize good samples in the reverse KL term, which is probably why adding the human baseline in equation 5, helps. Notably, imitation learning based on reverse KL divergence and more general f-divergences has been previously explored, e.g., in [B]. This would make one of the methods suggested in [B] a fair comparison in experiments. I noted a few other issues with the experiments. As stated earlier, I am suspicious of the idea that it is essential that the DM model the joint probability as opposed to the conditional probability—it seems plausible to me that the method would work just as well if the DM were trained as a conditional model (since the method doesn't use states sampled from the DM), so it would be interesting to test this experimentally. As also mentioned previously, I believe the manifold problem can be trivially solved by adding a small amount of random noise to the training examples. Doing so may significantly affect the results in Table 2—i.e., adding noise would probably boost the performance of the "without BC" column. Similar comments apply to the "manifold overfitting" results in section 5.4. I think this is an important experiment to run, because it may weaken one of the core claims, which is that adding BC to EBMs helps because it helps address the manifold overfitting problem. [B] Ke, Liyiming, et al. "Imitation learning as f-divergence minimization." Algorithmic Foundations of Robotics XIV: Proceedings of the Fourteenth Workshop on the Algorithmic Foundations of Robotics 14. Springer International Publishing, 2021. Technical Quality: 1 poor Clarity: 2 fair Questions for Authors: How do the results change (esp. the manifold overfitting experiments) when the data is perturbed with a small amount of random noise? Have you tried comparing the method to BC with some sort of simple data augmentation? 
Have you considered comparing to another method that optimizes another type of f-divergence (as exemplified by [B])? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 1 poor Presentation: 2 fair Contribution: 2 fair Limitations: I didn't see much discussion in the way of limitations. What about the efficiency and stability of training? Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
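The forward/reverse KL asymmetry this review appeals to can be checked numerically. A self-contained sketch; the bimodal "expert" and unimodal "model" distributions are invented for illustration:

```python
import numpy as np

def kl(p, q):
    """KL divergence D(p || q) between discrete distributions."""
    return float(np.sum(p * np.log(p / q)))

p = np.array([0.49, 0.02, 0.49])  # bimodal "expert" distribution
q = np.array([0.90, 0.05, 0.05])  # unimodal "model" covering one mode

forward = kl(p, q)  # mode-covering: heavily penalizes the missed mode
reverse = kl(q, p)  # mode-seeking: small, since q sits where p has mass
print(round(forward, 3), round(reverse, 3))
assert forward > reverse > 0.0
```

The forward direction blows up on the mode that q drops, while the reverse direction stays small because q only places mass where p already does, which is the complementary behavior the review describes.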
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the thorough and constructive comments. Please find the response to your questions below. **Q**: Originality **A**: A key contribution of our method is to derive an imitation learning policy that combines the advantages of modeling the conditional and joint probability of the expert demonstrations. To this end, we proposed to train a policy to optimize the BC loss that models the conditional probability and the proposed diffusion model loss that models the joint probability. We believe this idea is original in imitation learning without interacting with environments. On the other hand, the motivation of [1] is to improve the sample efficiency (i.e., environment interaction) of GAIL with the BC loss, which differs from our motivation. Also, while our method does not need to interact with environments, [1] requires interaction with environments. We will discuss this work in the revision. **Q**: Clarity **A**: We thank the reviewer for raising the concern regarding the clarity. We will revise our paper to address the potential confusion. Specifically, we will move the algorithm in the appendix to the main paper and revise the equations to clarify them. **Q**: Conditional diffusion models **A**: “The DM was trained as a conditional model” mentioned by the reviewer is precisely the Diffusion Policy [2] we extensively evaluated in our experiments. Specifically, the Diffusion Policy generates an action from a random noise given a state by learning to denoise a noisy action conditioned on a given state. The experimental results show that our proposed DBC outperforms Diffusion Policy in Table 1, Table 6, Table 7, Table 8, Figure 6, and Figure 7. This suggests that leveraging a learned diffusion model as a loss source together with the BC loss to train a feedforward policy can achieve better performance than learning a conditional diffusion model (i.e., Diffusion Policy). 
**Q**: Learning diffusion models with noisy expert data to address manifold overfitting **A**: We thank the reviewer for the insightful suggestion. We have conducted the following two sets of experiments to answer the reviewer's questions.
- The spiral dataset: We aim to see if adding noise to expert data can resolve manifold overfitting in the experiment presented in Section 5.4. To this end, we use the spiral dataset in Section 5.4. As suggested by the reviewer, we train a diffusion model on the spiral dataset while adding random noise sampled uniformly from [-0.1, 0.1] to sampled states and actions. Then, we guide policy learning with this diffusion model. The results are presented in the PDF file attached to the Author Rebuttal. According to the action distribution modeling result in Figure 2(a), the diffusion model trained with added noise does not significantly resolve the manifold overfitting issue. As a result, the learned diffusion model still fails to accurately guide the policy, resulting in the diverging trajectories presented in Figure 2(b). This indicates that adding noise to expert data cannot trivially solve manifold overfitting and justifies the necessity of modeling the conditional probability.
- The Maze dataset: To further examine the effect of adding noise, we train a diffusion model on the Maze expert dataset while adding random noise sampled uniformly from [-0.1, 0.1] to sampled states and actions. The policy guided by this diffusion model achieves a 52.78% ± 11.32% success rate, which underperforms learning from the clean dataset (53.51% ± 4.20%). Note that our proposed DBC achieves a success rate of 86.99% ± 2.84%, demonstrating the advantage of employing the BC loss.

The results of the two experiments show that adding noise to expert data when learning diffusion models may not be an effective solution to manifold overfitting.
**Q**: Data augmentation **A**: Following the reviewer's suggestion, we leverage a diffusion model learning from an expert dataset to generate state-action pairs to augment the dataset. Specifically, we use 18525 state-action pairs from the Maze dataset to train a diffusion model and then generate 18525 samples with the diffusion model. We use all the real and generated state-action pairs to learn a BC policy. The policy achieves a success rate of 81.35% ± 4.06%, slightly better than the BC policy learning from the original dataset with a success rate of 79.35% ± 5.05%. Note that our proposed DBC performs the best with a success rate of 86.99% ± 2.84%, justifying the effectiveness of using the diffusion model as a loss source instead of using it for data augmentation. **Q**: Forward and reverse KL divergence **A**: We thank the reviewer for sharing this explanation of why our proposed method works. We agree with your insights on how our proposed method optimizes the forward and reverse KL divergence simultaneously. We conducted derivations and validated the above statement. From our derivation, the reverse KL divergence between the expert's and agent's trajectories is equivalent to the negative likelihood of data samples from the agent's trajectories, on which the DM loss is a lower bound. We believe this perspective complements, rather than conflicts with, the intuitions and observations presented in our work. We aim to combine the advantages of modeling the conditional and joint probability of expert data. We empirically verify the disadvantages of solely modeling the conditional probability or the joint probability in Section 5.4. We also verify the improved generalization performance of our method in Section E. We highly appreciate your input and will incorporate your explanation of why our proposed method works and the reference [3] in the revision. **References** [1] Jena et al.
“Augmenting GAIL with BC for sample efficient imitation learning” CoRL 2020 [2] Chi et al. “Diffusion Policy: Visuomotor Policy Learning via Action Diffusion” RSS 2023 [3] Ke et al. "Imitation Learning as f-Divergence Minimization" WAFR 2020 --- Rebuttal Comment 1.1: Title: Rebuttal response Comment: Thank you for trying the experiment to add noise to address the manifold overfitting issue. However, I don't find the results that convincing—I think either one or both of the following hypotheses could explain the fact that adding noise only degraded performance:
1. The reason for the poor performance is not manifold overfitting, so adding noise does not help.
2. The magnitude of the added noise was too great. Ideally, one should try a variety of noise levels and select the one that yields the best performance.

Looking at figures 1b and 2b in the rebuttal PDF, I would expect that the results of $\pi_{DM}$ would vary continuously as a function of the noise level. The fact that there is a huge discrepancy between the two results would seem to indicate that the noise level is too high. If manifold overfitting is the problem, I would be fairly surprised if adding noise would not help. Manifold overfitting arises because the learner receives unbounded benefit in log-likelihood from squeezing the model PDF onto the manifold, regardless of whether it fits the data well otherwise. Adding noise places an upper limit on the amount of benefit we can receive from solely squeezing the PDF onto the manifold. So, I don't see why it shouldn't solve the problem. --- Reply to Comment 1.1.1: Title: Re: Rebuttal response Comment: ### Addressing Manifold Overfitting by Adding Noise to Expert Data **Modeling Expert Distribution** As suggested by the reviewer, we have conducted additional experiments to extensively evaluate diffusion models trained with various levels of noise added to the expert actions from the spiral dataset.
Since providing result figures is not allowed during this phase, we calculate the average MSE distance between expert actions and the reconstructions of the trained diffusion models, which indicates how well the diffusion models capture the expert distribution. We report the results below.

| Noise level | 0 | 0.002 | 0.005 | 0.01 | 0.02 | 0.05 | 0.1 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| MSE distance | 0.0162 | 0.0152 | **0.0148** | 0.0164 | 0.0188 | 0.0300 | 0.0491 |

We observe that applying a noise level of 0.005 results in the lowest MSE distance (0.0148), indicating that the diffusion model fits the expert data best with this magnitude of injected noise, slightly better than directly modeling the original distribution (i.e., with a noise level of 0). This finding aligns with the reviewer’s statement (i.e., adding noise to the expert distribution can alleviate the manifold overfitting issue). However, we would like to point out that adding noise still does not entirely address the issue, and there is still a discrepancy between the learned and expert distributions. **Guiding Policy Learning** Then, we investigate the performance of using the learned diffusion models to guide policy learning. Specifically, we train policies to optimize the diffusion model loss $\mathcal{L}\_{\text{DM}}$ provided by either the diffusion model trained with a noise level of 0 or the diffusion model trained with a noise level of 0.005, dubbed $\pi\_{\text{DM-0}}$ and $\pi\_{\text{DM-0.005}}$, respectively. We evaluate the performance of the policies by rolling out each policy and calculating the distance between its end location and the expert end location. A policy rollout is considered successful if the distance is not greater than 0.1. We report the average success rate of each policy over 100 episodes below.
| Method | $\pi\_{\text{DM-0}}$ | $\pi\_{\text{DM-0.005}}$ | $\pi\_{\text{BC}}$ |
| --- | --- | --- | --- |
| Success Rate | 27% | 44% | 93% |

The result suggests that the diffusion model trained on the expert distribution with a suitable magnitude of added noise can also better guide policy learning, achieving a success rate of 44% and outperforming the original diffusion model, which suffers more from manifold overfitting and achieves a success rate of 27%. Yet, directly learning to model the conditional probability (i.e., $\pi\_{\text{BC}}$) achieves a much higher success rate of 93%. This result verifies the advantage of modeling the conditional probability on this task, which motivates us to incorporate $\mathcal{L}\_{\text{BC}}$ in our proposed learning objective instead of solely optimizing $\mathcal{L}\_{\text{DM}}$. We sincerely thank the reviewer for pointing out the importance of selecting a suitable noise magnitude and helping us better investigate the mechanism of our proposed method. We will revise our paper to incorporate these findings. ### Other Concerns Finally, since the reviewer’s response only discusses the manifold overfitting experiment, we would like to know if our rebuttal addressed the other concerns in the original review. If not, we would love the chance to discuss them further. Thank you very much.
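For concreteness, the noise-level sweep described above can be sketched as follows. This is a minimal illustration with a toy 2-D dataset; `fit_memorising_model` is a hypothetical nearest-neighbour stand-in for actual diffusion-model training and reconstruction, so the numbers it produces will not match the table.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the spiral expert actions: points on a 1-D manifold in 2-D.
t = rng.uniform(0.0, 2.0 * np.pi, 500)
expert_actions = np.stack([np.cos(t), np.sin(t)], axis=1)

def fit_memorising_model(train_actions):
    """Hypothetical stand-in for training a diffusion model: reconstructs a
    query as its nearest training point (a real experiment would train a
    diffusion model on `train_actions` and reconstruct through it)."""
    def reconstruct(queries):
        d = np.linalg.norm(queries[:, None, :] - train_actions[None, :, :], axis=2)
        return train_actions[d.argmin(axis=1)]
    return reconstruct

# Sweep the noise levels used in the rebuttal and record the reconstruction
# MSE of the clean expert actions under each noise-augmented model.
noise_levels = [0.0, 0.002, 0.005, 0.01, 0.02, 0.05, 0.1]
mse_per_level = {}
for sigma in noise_levels:
    noisy = expert_actions + rng.normal(0.0, sigma, expert_actions.shape)
    model = fit_memorising_model(noisy)
    recon = model(expert_actions)
    mse_per_level[sigma] = float(np.mean((recon - expert_actions) ** 2))

for sigma, mse in mse_per_level.items():
    print(f"noise={sigma}: MSE={mse:.6f}")
```

The sweep-and-select structure is the point here; with a real diffusion model the MSE would not be zero at a noise level of 0, which is exactly the manifold-overfitting gap discussed above.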
Summary: The paper proposes a method to augment a behavior cloning (BC) agent with an additional diffusion loss. The goal is to leverage the conditional probability learned by the BC loss and the joint probability learned by the diffusion loss. The diffusion loss is calculated using the prediction error of a pre-trained diffusion model on both the agent's and expert's state-action pairs. Experimental results demonstrate that this method outperforms baseline methods, achieving improved performance. Strengths: 1. Effective Experimental Results: The paper provides experimental results that showcase the superior performance of DBC compared to several baselines, including BC, Implicit BC, and Diffusion Policy. 2. Leveraging the Advantage of Offline Learning: The proposed method operates purely in an offline setting, eliminating the need for interaction with the environment. This reduction in training complexity is an advantage. Weaknesses: 1. Sensitivity to the $\lambda$ Parameter: The paper introduces a $\lambda$ parameter to balance the BC loss and the diffusion model loss, but the analysis is limited to the maze environment. It is crucial to investigate and analyze the performance of the algorithm across various environments when the $\lambda$ parameter is modified. 2. Lack of Theoretical Analysis: The paper lacks a theoretical analysis of the proposed diffusion model loss. Providing a theoretical foundation for the diffusion loss would enhance the paper's scientific rigor. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. Mechanism of Knowledge Transfer: The paper does not sufficiently explain how the diffusion model loss leads to the agent's improved generalizability to unseen states. While modeling the joint distribution of state-action pairs can extrapolate/interpolate actions better than original BC on unseen states [1], it remains unclear how the knowledge generated by modeling the joint distribution is transferred to the agent.
Since DBC trains the agent with the diffusion model loss on **expert states only**, it is essential to provide a theoretical proof or analysis to address this question. The authors should clarify the mechanism behind the transfer of knowledge from the diffusion model to the agent and explain how this transfer leads to improved performance on **unseen states**. 2. Redundancy of Diffusion Model Loss: The paper describes the proposed diffusion model loss as a measure of how well a state-action pair $(s^e,\hat{a})$ fits the expert state-action pair distribution, where $s^e$ is the state of $(s^e,a^e)$ sampled from the expert's state-action distribution and $\hat{a}\sim \pi(s^e)$ is the action predicted by agent. Additionally, the BC loss $||\hat{a}-a^e||^2$ is used to ensure that the agent's prediction $\hat{a}$ is as close to $a^e$ as possible. Theoretically, the state-action pair $(s^e,a^e)$ can fully minimize the diffusion model loss since it already exists in the expert state-action pair distribution. Consequently, minimizing the BC loss should also minimize the diffusion model loss on all training data (which is the expert data), potentially rendering the diffusion model loss redundant. The authors should address this observation and clarify the necessity of the diffusion model loss despite its potential redundancy. 3. Maximizing Joint Probability vs. Conditional Probability: The paper claims that the agent benefits from the diffusion model loss because it models the joint probability $p(s,a)$. However, maximizing the joint probability reduces to maximizing the conditional probability when the state $s$ is fixed, since the states in training data cannot be modified by the agent. The authors should clarify this discrepancy and provide further insights. 4. Performance in Other Locomotion Tasks: The paper's locomotion task evaluation focuses solely on the Walker2d environment. 
To establish the generalizability of the proposed method, it is crucial to conduct experiments on other well-established benchmarks, including Hopper, HalfCheetah, Ant, and Humanoid. By evaluating the performance across these mainstream locomotion tasks, the authors can provide a more comprehensive assessment and demonstrate the effectiveness of the proposed method in diverse and challenging environments.
 5. Minor Concern: Conducting experiments with only three random seeds may not provide sufficient statistical significance. Increasing the number of random seeds would strengthen the experimental evaluation. [1] Florence, Pete, et al. "Implicit behavioral cloning." Conference on Robot Learning. PMLR, 2022. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: 1. Limited Task Scope: The experimental evaluation is confined to specific tasks, raising questions about DBC's performance in other types of tasks or more complex environments. 2. Offline Learning Restriction: As an offline imitation learning algorithm, DBC lacks the capability to acquire knowledge through online interaction with the environment. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the thorough and constructive comments. Please find the response to your questions below. **Q**: The diffusion model loss coefficient $\lambda$ **A**: As requested by the reviewer, we have conducted an additional ablation study on the diffusion model loss coefficient in the HandRotate environment. The results are shown below.

| $\lambda$ | 0.01 | 0.05 | 0.1 | 0.5 | 1 | 5 | 10 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Success Rate | 56.00 (4.31) | 57.96 (5.41) | 59.42 (5.24) | 60.73 (4.35) | 60.34 (4.61) | 58.82 (4.92) | 58.25 (4.88) |

The results indicate that the performance of our method is reasonably robust to the coefficient, which aligns with the main paper. With this new ablation study, we have verified the effect of the coefficient in both the simplest and the hardest environments. **Q**: Theoretical Analysis **A**: The theoretical motivation of our proposed method is included in Section G, “On the Theoretical Motivation for Guiding Policy Learning with Diffusion Model”. Diffusion models optimize the Evidence Lower Bound (ELBO) by minimizing the diffusion loss in Eq. 2. We then normalize the agent's diffusion loss by the expert's to calculate the proposed diffusion model loss in Eq. 5. Therefore, the proposed diffusion model loss can be seen as a normalized objective that aims to optimize the ELBO. **Q**: Mechanism of knowledge transfer **A**: The improved generalization performance on unseen states is rooted in jointly modeling expert states and actions. Implicit BC [1] demonstrates that joint modeling leads to better interpolation and extrapolation ability, allowing the policy to deal with unseen states despite learning only from expert states. In this work, we train a diffusion model to model the joint distribution of expert states and actions. Then, we use this trained diffusion model to guide a policy.
Optimizing the diffusion model loss encourages the policy to predict actions that are “recognizable” by the learned diffusion model. One can also view this as minimizing the learned distance between predicted and expert actions captured by the diffusion model. In short, we propose to transfer the knowledge acquired by the diffusion model to the policy by optimizing our proposed diffusion model loss. **Q**: Redundancy of diffusion model loss **A**: The BC loss brings the predicted and expert actions closer by optimizing a heuristic MSE distance. On the other hand, the proposed diffusion model loss guides the policy by providing a learned distance given a pair of predicted and expert actions. This learned distance is obtained by training a diffusion model to model the expert state-action pairs. As pointed out by the reviewer, at optimal convergence, i.e., when the policy perfectly fits the expert data, both distances should be zero. However, optimizing these two different learning objectives during the learning process is not “redundant.” This is evident from our experiments. Table 1 shows that optimizing both $\mathcal{L}\_{\text{DM}}$ and $\mathcal{L}\_{\text{BC}}$ outperforms solely optimizing $\mathcal{L}\_{\text{BC}}$. Also, Table 2 indicates that optimizing $\mathcal{L}\_{\text{BC}}$ together with a generative model-guided loss leads to improved performance. **Q**: Maximizing joint probability vs. conditional probability **A**: The joint probability $p(s, a)$ can be factored as the product of the marginal state probability and the conditional action probability via the chain rule of probability, i.e., $p(s, a) = p(s)p(a|s)$. Given a single state $s$, maximizing the joint probability reduces to maximizing the conditional probability. However, we consider learning from an expert dataset with states sampled from an unknown distribution. Therefore, modeling $p(s)$ is still required; it cannot be treated as a fixed constant.
That said, maximizing the joint probability $p(s, a)$ with the diffusion model is not equivalent to maximizing the conditional probability $p(a|s)$. **Q**: Performance in other locomotion tasks & limited task scope **A**: As requested by the reviewer, we conducted experiments in two additional environments: HalfCheetah and AntReach. HalfCheetah serves as another commonly used locomotion environment, as suggested by the reviewer. AntReach (adopted from [2]) features a Mujoco Ant agent required to move to a given target location, which combines the characteristics of both the locomotion and navigation domains. The results are shown below.

| Method | HalfCheetah | AntReach |
| --- | --- | --- |
| BC | 4871.71 (83.23) | 75.31 (2.58) |
| Implicit BC | 1282.09 (466.49) | 39.89 (4.81) |
| Diffusion Policy | 4654.70 (81.83) | 70.27 (3.57) |
| DBC (Ours) | **4878.92** (76.02) | **78.76** (1.97) |

In HalfCheetah, BC and our method (DBC) perform similarly, outperforming the other baselines, which aligns with the Walker2d results presented in the main paper. In AntReach, DBC outperforms all three baselines, verifying the effectiveness of the proposed method. As reported in the Implicit BC paper [1], Implicit BC struggles with action spaces of dimensionality greater than five and therefore performs poorly in HalfCheetah (dim=6) and AntReach (dim=8). **Q**: Three random seeds **A**: This work addresses the problem of imitation learning without interacting with the environment, which is often formulated as a supervised learning problem (e.g., behavioral cloning). In fact, our method and all the baselines considered in this work are supervised learning methods. Since the training of supervised learning methods is typically more stable than RL, we chose to report results with three random seeds, following our baselines, Implicit BC [1] and Diffusion Policy [3]. **References** [1] Florence et al. “Implicit behavioral cloning” CoRL 2021 [2] Lee et al.
“Generalizable imitation learning from observation via inferring goal proximity” NeurIPS 2021 [3] Cheng et al. "Diffusion policy: Visuomotor policy learning via action diffusion" RSS 2023 --- Rebuttal 2: Title: Please engage with author rebuttal Comment: Please engage with the author rebuttal as soon as possible. It is critical for the review process to have engagement from all reviewers, as we cannot easily judge the validity of the original review, nor fairly calibrate across papers, without commentary on the rebuttal. Thank you. --- Rebuttal 3: Comment: Thank you for your rebuttal. I appreciate the effort made in addressing my concerns; however, I still have reservations regarding the points raised. Though the diffusion model might have better interpolating and extrapolating ability on “unseen states”, in this work the agent is not acting with the diffusion model. The agent is trained with the loss calculated by this diffusion model instead, and the agent is only minimizing the diffusion model loss on “expert states”. So it is still unclear how the agent can achieve better performance on “unseen states” after training with the diffusion loss. If training the agent with the BC loss solely and using the diffusion model loss as a performance measurement, will the diffusion model loss reach its optimal value when the BC loss converges? When training the agent with the diffusion model loss $||\hat{\epsilon}(s,\hat{a},n)-\epsilon||^2$, the state $s$ is fixed, so the agent is just optimizing the conditional probability $\pi(a|s)$ on the expert’s states; given that optimizing the BC loss alone can also optimize the conditional probability $\pi(a|s)$, it is unclear what the difference is between training the agent with these two losses from a theoretical perspective. --- Rebuttal Comment 3.1: Comment: > Though the diffusion model might have better interpolating and extrapolating ability on “unseen states”, in this work the agent is not acting with the diffusion model.
The agent is trained with the loss calculated by this diffusion model instead, and the agent is only minimizing the diffusion model loss on “expert states”. So it is still unclear how the agent can achieve better performance on “unseen states” after training with the diffusion loss. We thank the reviewer for initiating further discussion on the mechanism of knowledge transfer. We propose to transfer the generalization ability acquired by the diffusion model to the policy by optimizing our proposed diffusion model loss, which captures the joint distribution of state-action pairs $p(s, a)$. As noted in our rebuttal, the joint probability $p(s, a)$ can be factored as the product of the marginal state probability and the conditional action probability via the chain rule of probability, i.e., $p(s, a) = p(s)p(a|s)$. We observe that the policy can achieve better performance on unseen states when the distribution of expert states $p(s)$ is taken into account, despite only observing the expert states during training. Extensive experimental results support the above observation. In Section 5.4, we show that the policy $\pi\_{\text{DM}}$ modeling the joint distribution $p(s, a)$ can generalize to unseen test regions while the policy $\pi\_{\text{BC}}$ modeling the conditional distribution $p(a|s)$ fails the task; in Section E, we show that our proposed DBC generalizes to unseen initial and target locations better than baselines that only model conditional distributions. Therefore, we believe that with the guidance of the diffusion model, which models the joint distribution $p(s, a)$ and considers the distribution of expert states $p(s)$, the generalization ability of the agent can be improved. We will revise the paper to make this clear. > If training the agent with the BC loss solely and using the diffusion model loss as a performance measurement, will the diffusion model loss reach its optimal value when the BC loss converges?
As requested by the reviewer, we have conducted additional experiments investigating the trend of $\mathcal{L}\_{\text{BC}}$ and $\mathcal{L}\_{\text{DM}}$ when selectively optimizing either one of them or both. Specifically, BC only optimizes $\mathcal{L}\_{\text{BC}}$, DM-only solely optimizes $\mathcal{L}\_{\text{DM}}$, and our proposed DBC optimizes $\mathcal{L}\_{\text{BC}} + \lambda \mathcal{L}\_{\text{DM}}$. In the following tables, we report the loss values of these three methods during training at 500, 1000, 1500, and 2000 epochs on the Maze task.

**BC**

| #Epoch | 500 | 1000 | 1500 | 2000 |
| --- | --- | --- | --- | --- |
| $\mathcal{L}\_{\text{BC}}$ | 0.2403 (0.0200) | 0.1953 (0.0142) | 0.1829 (0.0129) | 0.1726 (0.0121) |
| $\mathcal{L}\_{\text{DM}}$ | 0.0333 (0.0307) | 0.0285 (0.0262) | 0.0252 (0.0233) | 0.0228 (0.0217) |

**DM-only**

| #Epoch | 500 | 1000 | 1500 | 2000 |
| --- | --- | --- | --- | --- |
| $\mathcal{L}\_{\text{BC}}$ | 0.6067 (0.0349) | 0.5934 (0.0327) | 0.5297 (0.0266) | 0.5671 (0.0452) |
| $\mathcal{L}\_{\text{DM}}$ | 0.0394 (0.0214) | 0.0273 (0.0186) | 0.0196 (0.0190) | 0.0158 (0.0139) |

**DBC (Ours)**

| #Epoch | 500 | 1000 | 1500 | 2000 |
| --- | --- | --- | --- | --- |
| $\mathcal{L}\_{\text{BC}}$ | 0.3413 (0.0266) | 0.3127 (0.0268) | 0.2899 (0.0029) | 0.2698 (0.0174) |
| $\mathcal{L}\_{\text{DM}}$ | 0.0401 (0.0342) | 0.0310 (0.0247) | 0.0213 (0.0196) | 0.0162 (0.0256) |

The results show that even when BC only optimizes $\mathcal{L}\_{\text{BC}}$, $\mathcal{L}\_{\text{DM}}$ also decreases. However, the $\mathcal{L}\_{\text{DM}}$ of BC converges to a higher value (0.0228) than when only optimizing $\mathcal{L}\_{\text{DM}}$, where DM-only achieves an $\mathcal{L}\_{\text{DM}}$ value of 0.0158. On the other hand, our proposed DBC can effectively optimize both $\mathcal{L}\_{\text{BC}}$ and $\mathcal{L}\_{\text{DM}}$, demonstrating the compatibility of the two losses, which justifies the proposed combination of the two losses.
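The combined objective discussed above, $\mathcal{L}\_{\text{BC}} + \lambda \mathcal{L}\_{\text{DM}}$, can be sketched schematically as follows. The linear policy and the noise predictor below are hypothetical stand-ins for the trained networks, used only to show how the two loss terms are assembled.

```python
import numpy as np

rng = np.random.default_rng(1)
state_dim, action_dim = 4, 2

# A batch of (hypothetical) expert state-action pairs.
s = rng.normal(size=(8, state_dim))
a_expert = rng.normal(size=(8, action_dim))

def policy(states):
    """Hypothetical linear policy pi(s), for illustration only."""
    W = np.full((state_dim, action_dim), 0.1)
    return states @ W

def eps_hat(states, actions, eps, n):
    """Stand-in for the frozen diffusion model's noise predictor
    epsilon_hat(s, a, n); a real predictor would be a trained network."""
    x = np.concatenate([states, actions], axis=1)
    return eps + 0.01 * np.tanh(x[:, : eps.shape[1]])  # imperfect prediction

a_pred = policy(s)

# L_BC: plain MSE between predicted and expert actions.
L_bc = float(np.mean((a_pred - a_expert) ** 2))

# L_DM (schematically): epsilon-prediction error of the frozen diffusion
# model on the agent's (s, a_pred) pair at a sampled diffusion step n.
n = 10
eps = rng.normal(size=a_pred.shape)
L_dm = float(np.mean((eps_hat(s, a_pred, eps, n) - eps) ** 2))

# DBC objective: L_BC + lambda * L_DM.
lam = 0.5
L_total = L_bc + lam * L_dm
print(L_bc, L_dm, L_total)
```

In the actual method both terms are differentiated through the policy's action prediction; here only the forward computation of the objective is shown.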
--- Rebuttal Comment 3.2: Comment: > When training the agent with the diffusion model loss, the state is fixed, the agent is just optimizing the conditional probability on the expert’s state, and given that optimizing the BC loss solely can optimize the conditional probability. It is unclear what the difference is between training the agent with these two losses from a theoretical perspective. Given **a single state $s$**, minimizing $\mathcal{L}\_{\text{DM}}$ reduces to maximizing the conditional probability $p(a|s)$. However, we consider learning from **an expert demonstration dataset $D$**, where the distribution of states $p(s)$ is unknown, so modeling $p(s)$ is still required and cannot be treated as a fixed constant. From a theoretical perspective, the joint probability $p(s, a)$ can be factored as the product of the marginal state probability and the conditional action probability via the chain rule of probability, i.e., $p(s, a) = p(s)p(a|s)$. In short, our $\mathcal{L}\_{\text{DM}}$ takes $p(s)$ into account to model the joint distribution while $\mathcal{L}\_{\text{BC}}$ optimizes $p(a|s)$ directly. To be more specific, MSE is a “non-learnable” measure, dubbed $\mathcal{L}\_{\text{BC}}$ in our paper. In contrast, the proposed diffusion model loss $\mathcal{L}\_{\text{DM}}$ is a learned distance, which measures the distance between an expert action $a$ and a predicted action $\hat{a}$ given a state $s$. As discussed in the paper and the rebuttal, while $\mathcal{L}\_{\text{BC}}$ and $\mathcal{L}\_{\text{DM}}$ both aim to bring the learner policy $\pi$ to the expert policy $\pi\_{\text{expert}}$, they are not identical. For example, $\mathcal{L}\_{\text{BC}}$ brings $\hat{a}$ closer to $a$ without considering the given state $s$; yet, $\mathcal{L}\_{\text{DM}}$ is calculated based on $(s, \hat{a})$ and $(s, a)$, which takes $s$ into account.
Note that despite their difference, when $\pi$ converges to $\pi\_{\text{expert}}$, both $\mathcal{L}\_{\text{BC}}$ and $\mathcal{L}\_{\text{DM}}$ converge to 0, indicating that these two losses are not conflicting. We thank the reviewer for the question. We will revise the paper to clarify this.
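To make the distinction concrete, here is a toy numeric sketch. `learned_distance` is a hypothetical state-conditioned score standing in for $\mathcal{L}\_{\text{DM}}$ (the actual loss comes from the trained noise predictor); it illustrates that the MSE term ignores the state while a learned distance can evaluate the same action pair differently per state.

```python
import numpy as np

a_expert = np.array([0.5, -0.2])
a_pred = np.array([0.6, -0.1])

# L_BC: MSE between the two actions; the state plays no role.
l_bc = float(np.mean((a_pred - a_expert) ** 2))

def learned_distance(s, a):
    """Hypothetical stand-in for a learned, state-conditioned distance such
    as L_DM (a real L_DM comes from the trained diffusion model)."""
    return float(np.sum((a - np.tanh(s[:2])) ** 2))

s1 = np.array([0.0, 0.0, 1.0])
s2 = np.array([1.0, 0.0, 0.0])

# How much worse the predicted action scores than the expert action,
# under each of the two states.
gap1 = learned_distance(s1, a_pred) - learned_distance(s1, a_expert)
gap2 = learned_distance(s2, a_pred) - learned_distance(s2, a_expert)

# The MSE term is identical in both states, while the state-conditioned
# distance ranks the same action pair differently depending on the state.
print(l_bc, gap1, gap2)
```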
Summary: This paper presents a method for guiding behavior cloning via state-action joint distribution learning. They train a diffusion model to maximize the log-likelihood of state and action pairs in conjunction with an imitation learning model that learns to mimic expert actions given state observations. They combine the two objectives to obtain a policy that predicts actions that fit the expert joint probability captured by a diffusion model. The authors demonstrate experiments on four continuous control domains, and show ablations to demonstrate the effects of their design choices. Strengths: 1) The paper is well-structured and easy to follow. The experimental setup is clearly described with relevant details. The diffusion model for learning a joint distribution over state-action pairs is novel and is presented concisely. 2) The considered tasks for this method are challenging and results on 4 out of 5 continuous control tasks show improvements over the considered baselines. Weaknesses: 1) Intro: The authors state that implicit behavioral cloning (IBC) learns a joint distribution of state and action p(s,a). This is incorrect. IBC learns the joint “energy” E(s,a) but the learned distributions are still conditionals p(a|s). This is evident from the fact that IBC is trained to maximize the log-likelihood of actions in the dataset, and minimize those of sampled negative actions, given state inputs, and never trained to generate states or minimize the energy of negatively-sampled states. This makes a major claim of this paper incorrect. 2) Section 3.1: Talking about reinforcement learning as a preliminary seems very absurd and misleading for a subsection on imitation learning, and also when the paper has nothing to do with learning from rewards. 3) The core approach of this paper does not seem technically sound to me. 
Learning two distributions over the same random variables (state and action) seems to bring inconsistency to the probabilistic model where at least one of the two distributions is bound to be approximate. Technical Quality: 1 poor Clarity: 2 fair Questions for Authors: 1) According to the authors, what are some design choices that are critical to learning the joint over two variables of different modalities? 2) Table 2 shows the performance in Maze when using different generative models for guiding policy learning. Can the authors comment on the ranking of all methods in this table, why GANs performed the worst and Diffusion Models the best? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 1 poor Presentation: 2 fair Contribution: 2 fair Limitations: The authors have addressed the limitations of their method in the supplementary submission. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the thorough and constructive comments. Please find the response to your questions below. **Q**: Implicit BC **A**: The Implicit BC paper [1] defines an implicit model as follows: “We define an implicit model as any composition ($\arg \mathop{\min}\_{y} ◦ E_θ(x,y)$), in which inference is performed using some general-purpose function approximator $E: R^{m+n} \rightarrow R^1$ to solve the optimization problem $\hat{y} = \arg \mathop{\min}\_{y} E(x,y)$.” That is, the implicit model learns the joint energy/distribution of $(s, a)$ pairs with an energy-based model (EBM) $E\_\theta(x,y)$, and the approximator $E$ is a derivative-free optimizer with no learnable parameters. As a result, while the implicit BC framework can be seen as a conditional model that predicts an action $a$ given a state $s$, the learned EBM $E\_\theta(s, a)$ still models the joint distribution of states and actions. Moreover, while the InfoNCE loss leveraged in the Implicit BC framework maximizes the log-likelihood of actions, this is only one way to implement $E\_\theta(s, a)$. One can use different generative models to approximate $E\_\theta(s, a)$ to model the joint distribution, as shown in the experiments conducted in Section 5.5 of our paper. We thank the reviewer for raising this concern and apologize for the possible confusion. We will revise the paper to make this clear. **Q**: RL preliminaries **A**: We thank the reviewer for the suggestion. We aimed to provide reinforcement learning (RL) preliminaries to readers without such background, but we agree that discussing RL could potentially be distracting given the context. As suggested, we will revise the paper and remove the RL preliminaries. **Q**: Learning two distributions over the same variables (s, a) **A**: In our work, optimizing the BC loss ($\mathcal{L}\_{\text{BC}}$) brings the predicted actions and the expert actions closer in terms of a heuristic MSE distance.
On the other hand, the proposed diffusion model loss ($\mathcal{L}\_{\text{DM}}$) guides the policy by providing a learned distance given a pair of predicted and expert actions. This learned distance is obtained by training a diffusion model to model the joint probability of expert states and actions. At optimal convergence, i.e., when the policy perfectly fits the expert data, both of these distances should be zero. Therefore, we believe learning two distributions with these two different learning objectives does not bring “inconsistency.” This is evident from our experimental results. We observe that the BC loss $\mathcal{L}\_{\text{BC}}$ and the diffusion model loss $\mathcal{L}\_{\text{DM}}$ decrease concurrently during training. Table 1 shows that learning from $\mathcal{L}\_{\text{BC}}$ and $\mathcal{L}\_{\text{DM}}$ (i.e., DBC) outperforms learning from $\mathcal{L}\_{\text{BC}}$ alone (i.e., BC). Moreover, Table 2 shows that learning from both losses significantly outperforms solely learning the conditional distribution ($\mathcal{L}\_{\text{BC}}$) or the joint distribution (i.e., the loss computed from the generative model). These extensive empirical results demonstrate that combining conditional and joint probability learning is beneficial. **Q**: Design choices that are critical to learning the joint over two variables of different modalities **A**: Following previous works [1, 2, 3] that model the joint probability of states and actions, we simply concatenate the vectorized state and action and feed the concatenated vector to an MLP model. When experimenting with our method and the baselines, we found that normalizing states and actions so that the two variables have roughly the same value range leads to improved performance. **Q**: The ranking of generative models **A**: Section 5.5 in the main paper compares different generative model-guided policies.
We experimented with optimizing only the loss from the generative model (GM-only) and combining the generative model loss with the BC loss (BC+GM), as presented in Table 2. - **GM-only**: The ranking of GM-only roughly reflects the popularity of generative model research over time. VAE and EBM perform reasonably, while GAN, which dominated generative model research around 2017-2021, outperforms these two. Diffusion models, which have recently attracted the most attention, perform best. - **BC+GM**: The performance of BC+GM with each generative model depends not only on the modeling capability of the GM but also on whether it is compatible with the BC loss. Therefore, their performance is more difficult to explain. Our method (BC+DM) performs best, justifying our design choice (i.e., employing a diffusion model to guide the policy). **References** [1] Florence et al. “Implicit behavioral cloning” CoRL 2021 [2] Kostrikov et al. "Offline reinforcement learning with implicit q-learning" ICLR 2022 [3] Ganapathi et al. "Implicit kinematic policies: Unifying joint and cartesian action spaces in end-to-end robot learning" ICRA 2022 --- Rebuttal 2: Title: Please engage with author rebuttal Comment: Please engage with the author rebuttal as soon as possible. It is critical for the review process to have engagement from all reviewers, as we cannot easily judge the validity of the original review, nor fairly calibrate across papers, without commentary on the rebuttal. Thank you. --- Rebuttal Comment 2.1: Title: Response to rebuttal Comment: I thank the authors for their rebuttal. Please find my response below. > Implicit BC The authors correctly pointed out the formulation of implicit BC by Florence et al.; however, I would again like to make the same observation - while the energies are joint, the distribution on the action is a conditional p(a|s).
As I pointed out in my review, IBC is trained to maximize the log-likelihood of actions in the dataset, and minimize those of sampled negative actions, given state inputs, and is never trained to generate states or minimize the energy of negatively-sampled states. While the authors could decide to learn a joint distribution of actions and observations, using the findings of IBC as a major standing ground for their research is problematic. > Learning two distributions over the same variables (s, a) I note that Reviewer 2m35 raised a similar concern where they pointed out that both losses were theoretically redundant. Having read both the response to that question as well as mine, I absolutely do not align with the authors’ position. In the rebuttal, the authors mention that “BC loss brings the predicted actions and the expert actions closer in terms of a heuristic MSE distance” - MSE is not a heuristic distance, but rather the negative log-likelihood of the data under a parameterized normal distribution. Diffusion models also minimize the same distance but with a different parameterization. While I believe that minimizing both objectives separately can give us different results, the authors are unable to provide a technically sound argument as to why we should expect any benefits at all from using both objectives together. Simply pointing out that one is an MSE while the other is a learned distance does not put much weight into the argument. Having pointed out theoretical irregularities between this paper and the existing literature, as well as missing technically sound arguments to defend the research, my assessment remains the same. --- Reply to Comment 2.1.1: Comment: > Implicit BC We thank the reviewer for initiating further discussion on Implicit BC. We agree with the reviewer that Implicit BC learns from sampled negative actions given state inputs, and does not learn to generate states nor minimize the energy of negatively-sampled states.
We will revise the paper to make this clear to the readers. Specifically, in our revision, we will motivate the proposed method based on the improved generalization performance of modeling the state-action joint distribution shown in Section 5.4, where Figure 4 shows that $\pi\_{\text{DM}}$ exhibits better interpolation and extrapolation ability. That said, we will include and discuss Implicit BC as a baseline, instead of mentioning it as an example of modeling joint distribution or using the findings of IBC as a standing ground for our proposed method. We sincerely thank the reviewer for this detailed feedback, which helps us clarify our work. > Learning two distributions over the same variables (s, a) We thank the reviewer for raising this point. We would like to apologize for misusing the term “heuristic”. To clarify, our intention was to emphasize that MSE is a “non-learnable” measure, dubbed $\mathcal{L}\_{\text{BC}}$ in our paper. In contrast, the proposed diffusion model loss $\mathcal{L}\_{\text{DM}}$ is a learned distance, which measures the distance between an expert action $a$ and a predicted action $\hat{a}$ given a state $s$. As discussed in the paper and the rebuttal, while $\mathcal{L}\_{\text{BC}}$ and $\mathcal{L}\_{\text{DM}}$ both aim to bring the learner policy $\pi$ to the expert policy $\pi\_{\text{expert}}$, they are not identical. For example, $\mathcal{L}\_{\text{BC}}$ brings $\hat{a}$ closer to $a$ without considering the given state $s$; yet, $\mathcal{L}\_{\text{DM}}$ is calculated based on $(s, \hat{a})$ and $(s, a)$, which takes $s$ into account. Note that despite their differences, when $\pi$ converges to $\pi\_{\text{expert}}$, both $\mathcal{L}\_{\text{BC}}$ and $\mathcal{L}\_{\text{DM}}$ converge to 0, indicating that these two losses are not conflicting. Our extensive experiments in the paper show that optimizing a combination of these two losses results in the best performance, compared to solely optimizing each of them.
Table 1 shows that DBC ($\mathcal{L}\_{\text{BC}} + \lambda \mathcal{L}\_{\text{DM}}$) outperforms BC ($\mathcal{L}\_{\text{BC}}$). Table 2 shows that optimizing $\mathcal{L}\_{\text{BC}} + \lambda \mathcal{L}\_{\text{DM}}$ outperforms solely optimizing $\mathcal{L}\_{\text{DM}}$. Furthermore, Table 2 demonstrates that this is not specific to diffusion models; instead, combining a learned loss from an energy-based model (EBM) or a variational autoencoder (VAE) with a BC loss also achieves better performance than learning from an EBM or a VAE alone, or optimizing only $\mathcal{L}\_{\text{BC}}$. While we agree with the reviewer that our paper does not provide a theoretical ground for combining the two losses, we still firmly believe that our contributions are solid and novel, given extensive experimental results and analyses.
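For concreteness, the interplay between the two losses can be sketched as a toy numpy example. This is only an illustration of the combined objective $\mathcal{L}_{\text{BC}} + \lambda \mathcal{L}_{\text{DM}}$ under our own simplifying assumptions: the linear `policy`, the fixed random `eps_model` standing in for a frozen diffusion model, and the single noise level are all hypothetical, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy expert data: states s and expert actions a.
s = rng.normal(size=(32, 4))
a = rng.normal(size=(32, 2))

W = rng.normal(scale=0.1, size=(4, 2))   # toy linear policy parameters
P = rng.normal(scale=0.1, size=(6, 2))   # toy "frozen diffusion model" weights

def policy(s):
    return s @ W                          # a_hat = pi(s)

def l_bc(a_hat, a):
    # Non-learned MSE between predicted and expert actions; ignores s.
    return np.mean((a_hat - a) ** 2)

def eps_model(s, a_noisy):
    # Stand-in for a frozen diffusion model trained on the joint (s, a):
    # it predicts the injected noise from the noised state-action pair.
    x = np.concatenate([s, a_noisy], axis=1)
    return x @ P

def l_dm(s, a_hat):
    # "Learned" distance: how well the frozen model explains a noised
    # version of (s, a_hat); conditions on s, unlike l_bc.
    t = 0.5
    eps = rng.normal(size=a_hat.shape)
    a_noisy = np.sqrt(1 - t) * a_hat + np.sqrt(t) * eps
    return np.mean((eps_model(s, a_noisy) - eps) ** 2)

lam = 0.5
a_hat = policy(s)
total = l_bc(a_hat, a) + lam * l_dm(s, a_hat)   # DBC-style combined objective
```

Both terms are non-negative and reach zero only when the policy matches the expert (up to the diffusion model's own fit), which is the non-conflicting property argued above.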
Rebuttal 1: Rebuttal: This PDF file addresses **Reviewer Pa84**'s question regarding learning diffusion models with noisy expert data to address manifold overfitting. Pdf: /pdf/f74ebc9b873ce34ee0c2f59d5970a52fe24d92e9.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper presents a novel approach in the field of imitation learning. The authors address the challenge of learning from expert demonstrations without access to reward signals from the environment. They propose a framework called Diffusion Model-Augmented Behavioral Cloning (DBC) that combines the benefits of modeling both the conditional and joint probability of the expert distribution. The authors demonstrate the effectiveness of DBC through extensive experiments in various continuous control tasks, including navigation, robot arm manipulation, dexterous manipulation, and locomotion. However, there are certain strengths, weaknesses, limitations, and questions that need to be addressed regarding this work. Strengths: 1) Extensive experiments: The article presents a wide range of experiments conducted on diverse tasks, including navigation, robot arm manipulation, dexterous manipulation, and locomotion. This comprehensive evaluation demonstrates the effectiveness of DBC in various scenarios. 2) Novel approach: The proposed DBC framework offers a promising approach to imitation learning. By combining behavioral cloning with a diffusion model, the authors achieve more stable training compared to methods that combine behavioral cloning with GANs, such as GAIL. 3) Clear comparison: The authors compare DBC with existing baselines, providing a clear understanding of its advantages over other methods in terms of performance and generalization. Weaknesses: 1) Coefficient selection: The combination of the behavioral cloning loss and diffusion model loss in DBC relies on a simple addition of coefficients. However, the sensitivity of these coefficients to different environments should be further investigated. A more elegant approach and thorough ablation experiments on diverse tasks are needed to validate the coefficient selection process. Currently, this paper only provides an ablation of this coefficient in a single environment (Maze).
2) Lack of comprehensive comparison: Although DBC is compared with baselines, the article does not provide a comprehensive comparison with other state-of-the-art methods in the field of imitation learning. It would be beneficial to evaluate DBC against a wider range of approaches, including GAIL and other recent advancements, to gain a more comprehensive understanding of its relative performance. This would provide a clearer perspective on the strengths and weaknesses of DBC in comparison to alternative methods. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1) How does the proposed DBC framework handle scenarios with complex and high-dimensional state spaces? Are there any specific limitations or challenges encountered in such cases? 2) Are there any plans to extend the evaluation of the diffusion model to other environments beyond the MAZE setting? How does the diffusion model compare to other generative models in terms of performance and efficiency? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: While the article highlights improved generalization performance, there is limited analysis or discussion on the factors contributing to this improvement. A deeper analysis of the generalization capabilities and limitations of DBC would enhance the understanding of its strengths and weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the thorough and constructive comments. Please find the response to your questions below. **Q**: The diffusion model loss coefficient $\lambda$ **A**: In the main paper, we chose the Maze environment to ablate the diffusion model loss coefficient since it is fast to run and easier to visualize and analyze. As requested by the reviewer, we have conducted an additional ablation study on the diffusion model loss coefficient in the HandRotate environment. The results are shown in the table below.

| $\lambda$ | 0.01 | 0.05 | 0.1 | 0.5 | 1 | 5 | 10 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Success Rate | 56.00 (4.31) | 57.96 (5.41) | 59.42 (5.24) | 60.73 (4.35) | 60.34 (4.61) | 58.82 (4.92) | 58.25 (4.88) |

The results indicate that the performance of our method is reasonably robust to the coefficient, which aligns with the main paper. With this new ablation study, we have verified the effect of the coefficient in both the simplest and hardest environments. **Q**: Comparisons to GAIL and its extensions **A**: As stated in Section 1 and Section 2, our problem formulation differs from GAIL [1] and its extensions, such as WAIL [2] and TRAIL [3]. Specifically, our method only learns from the expert dataset without interacting with the environment, while GAIL-based methods require interacting with the environment and observing the environment dynamics, which may not always be possible. This significant difference makes it unfair to compare our method with GAIL-based methods. In this work, we extensively compared our proposed method to state-of-the-art imitation learning methods that do not require interaction with the environment, including Implicit BC (CoRL 2021) [4] and Diffusion Policy (ICLR 2023 and RSS 2023) [5] [6]. We believe these methods are competitive, and comparing against them is sufficient.
**Q**: Handling high-dimensional state spaces **A**: The HandRotate environment used in our experiment features a 68-dimensional state space, which is relatively high compared to the commonly used environments in imitation learning literature. Our method achieves satisfactory performance in this environment with a high-dimensional state space. Therefore, we currently believe that there is no fundamental limitation in applying our method to environments with high-dimensional state spaces. **Q**: More comparisons to other generative models **A**: In Section 5.5, we have compared diffusion models to various generative models, including energy-based models (EBMs), variational autoencoders (VAEs), and generative adversarial networks (GANs) in Maze. The results indicate that diffusion models outperform the rest when used to guide a policy. As requested by the reviewer, we have further conducted an additional experiment to compare different generative models in the HandRotate environment, which is considered the hardest among our environments. The results are shown in the table below.

| Method | without BC | with BC |
|---|---|---|
| BC | N/A | 55.48 (3.97) |
| EBM | 5.04 (5.50) | 49.30 (3.86) |
| VAE | 0.45 (0.62) | 57.24 (5.58) |
| GAN | 53.38 (4.45) | 54.41 (4.38) |
| DM | **59.20** (4.23) | **60.73** (4.35) |

The results show that diffusion models achieve the best performance with or without combining with the BC loss, which aligns with our findings in the main paper. The GAN performs reasonably, while the learned EBM and VAE fail to guide policy learning alone. As for efficiency, all the generative models require only one forward and backward pass to compute the gradients to update the policy, and therefore they are all reasonably efficient.
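As background for the efficiency point above, the single-step denoising-score-matching objective that a DDPM-style diffusion model is trained with can be sketched in numpy. This is a generic textbook construction, not the authors' code; the linear `eps_net` is a placeholder assumption for the learned noise predictor over joint state-action samples.

```python
import numpy as np

rng = np.random.default_rng(1)

# Joint state-action samples x0 = (s, a) from an expert dataset (toy).
x0 = rng.normal(size=(64, 6))

# Standard DDPM forward process: x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps.
T = 100
betas = np.linspace(1e-4, 2e-2, T)
abar = np.cumprod(1.0 - betas)          # cumulative product of (1 - beta_t)

Wn = rng.normal(scale=0.1, size=(6, 6))  # placeholder noise-predictor weights

def eps_net(x_t):
    # Stand-in for the learned noise predictor epsilon_theta(x_t, t).
    return x_t @ Wn

t = rng.integers(0, T, size=x0.shape[0])
eps = rng.normal(size=x0.shape)
x_t = np.sqrt(abar[t])[:, None] * x0 + np.sqrt(1 - abar[t])[:, None] * eps

# One forward pass yields one scalar loss; no iterative denoising is needed
# to score a state-action pair, which is the efficiency claim being made.
loss = np.mean((eps_net(x_t) - eps) ** 2)
```

Scoring a candidate pair this way costs a single forward/backward pass, in contrast to sample generation, which requires iterating the reverse process over many steps.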
Note that our method does not utilize the learned diffusion model to generate samples, which is known to be time-consuming due to the iterative denoising process; instead, our proposed method only requires the diffusion model to predict a noise vector in one step from a given noisy state-action pair and update our policy. **Q**: A deeper analysis of the generalization capabilities **A**: As stated in Section 1 and Section 3.3, the improved generalization performance is rooted in modeling the joint probability of expert states and actions. This is inspired by the Implicit BC paper [4], which demonstrates joint modeling leads to better interpolating and extrapolating ability. In our work, we specifically demonstrate the improved generalization performance in Section 5.4, which evaluates the ability to interpolate and extrapolate in a 2D navigation task. Moreover, Section E presents extensive generalization experiments on FetchPick and FetchPush, where the learned policies are required to handle unseen initial states and goal locations, which further justify the generalization ability of our proposed method. According to the results, we believe that the improved generalization ability can be attributed to optimizing our proposed diffusion model loss $\mathcal{L}_\text{DM}$. Specifically, we jointly model the expert state-action distribution with a diffusion model, which has a better ability to interpolate and extrapolate, similar to the energy-based models in Implicit BC [4]. Then, we guide policy learning with this diffusion model, improving our learned policy's generalization ability. **References** [1] Ho and Ermon “Generative Adversarial Imitation Learning” NIPS 2016 [2] Xiao et al. “Wasserstein adversarial imitation learning” arXiv 2019 [3] Zolna et al. “Task-Relevant Adversarial Imitation Learning” CoRL 2020 [4] Florence et al. “Implicit behavioral cloning” CoRL 2021 [5] Pearce et al. “Imitating Human Behaviour with Diffusion Models” ICLR 2023 [6] Cheng et al. 
“Diffusion Policy: Visuomotor Policy Learning via Action Diffusion” RSS 2023 --- Rebuttal Comment 1.1: Title: Thanks for the authors' thoughtful response Comment: Thanks for the authors' thoughtful response. My queries have been clarified, and I believe the response was well-articulated.
Concept Distillation: Leveraging Human-Centered Explanations for Model Improvement
Accept (poster)
Summary: The authors introduce an idea by which a student model can become (de)sensitized to some human-understandable concept in its decision-making process. They find CAVs in a teacher model, transform those CAVs into the student model's feature space, and use vectors orthogonal to these CAVs to penalize/incentivize models to make decisions based on those concepts. They improve over baselines on biased datasets where a concept is known to be the underlying bias. Strengths: + I like their idea of debiasing the student model based on human feedback on biased and unbiased samples. + The method shows superior improvement over existing results when evaluated on the biased dataset. + They visually validate that their model is focusing on more important parts of the images in decision-making. Weaknesses: + I'd expect to see some more analysis of the student model's performance. For example, why does the model still get relatively bad accuracy on ColorMNIST? Is it still biased toward the colors? Can we still debias this model and push to get around 100% accuracy? If not, what are the limitations? + This is a minor weak point, but I expect to see some recent but relevant related work in the paper. As the paper is considering an alignment between concepts in different models' feature spaces, it reminded me of a recent paper "Text-To-Concept (and Back) via Cross-Model Alignment" where an alignment between feature spaces is used to interpret a model's behavior in terms of human concepts. The general framework is somewhat similar to this paper. Furthermore, the idea of considering the normal vector orthogonal to the decision-making half-plane is somewhat relevant to this paper "Distilling Model Failures as Directions in Latent Space". I want the authors to consider those and other relevant papers. + The quality of figures should improve (Fig 4). Technical Quality: 3 good Clarity: 3 good Questions for Authors: My questions are found in the weakness part.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the very positive feedback. We will address the points below: > Analysis on the student model's performance The model remains biased due to the severe bias (100%) in the training set, though we improve significantly over prior efforts. This demonstrates the effectiveness of using explainability measures like CAVs to improve a model. One way to completely debias a model on a chosen concept could be to re-distill iteratively, using the results from the previous iteration as input to the next. Bootstrapping models in such a way would be an interesting and fruitful direction for the future. > Regarding related work Thank you for pointing to the recent papers. We will add them to the final version if given a chance. "Distilling Model Failures as Directions in Latent Space" uses SVMs to find directions of bias in a shared image and language space (of CLIP), then trains using bias-conflicting samples generated by models like Dall-E. Training on bias-conflicting samples might not always be feasible due to higher annotation and computation costs. One major difference is that their work proposes data augmentation as a debiasing strategy, whereas we directly manipulate the gradient vectors, which is more interpretable. "Text-To-Concept (and Back) via Cross-Model Alignment" is a contemporary work (released after the NeurIPS submission) and is very relevant. It maps the activation spaces of two different models using the CLIP latent space similarity. Due to the generality of the CLIP latent space, this approach is useful to encode certain concepts like ‘cat, dog, man’ etc., but it is not clear how it will work on abstract concepts with ambiguous definitions like ‘shading’ and ‘reflectance’ as seen in the IID problem. We are glad to see such current research and think our work will interest many more. > Quality of figures should improve (Fig 4). We will improve the resolution and clarify the details in all the figures. Thank you!
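The CAV machinery discussed in this exchange can be sketched as follows. This is a standard TCAV-style construction under our own simplifications: the CAV is taken as the mean-difference direction between concept and random activations (a cheap stand-in for the linear separator usually fit with logistic regression), and all names are illustrative, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(2)

# Layer activations for concept-set examples vs. random examples (toy).
acts_concept = rng.normal(loc=1.0, size=(50, 8))
acts_random = rng.normal(loc=0.0, size=(50, 8))

# CAV: unit normal of a linear separator between the two sets.
# Here a mean-difference direction stands in for a trained separator.
cav = acts_concept.mean(0) - acts_random.mean(0)
cav /= np.linalg.norm(cav)

def concept_sensitivity(grad, cav):
    # Projection of a loss gradient (w.r.t. the layer activations) onto
    # the CAV; desensitizing a model pushes this projection toward zero.
    return float(grad @ cav)

grad = rng.normal(size=8)
penalty = concept_sensitivity(grad, cav) ** 2    # concept-distillation-style loss
grad_debiased = grad - (grad @ cav) * cav        # component orthogonal to the CAV
```

The debiased gradient has, by construction, zero component along the concept direction, which is the "gradients orthogonal to the CAV" idea the rebuttal refers to.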
Summary: This paper proposes a methodology for training a model to be sensitized or desensitized to a specific concept. Particularly, it introduces a concept distillation loss that utilizes Concept Activation Vectors (CAVs) derived from a high-performing teacher classifier with abundant knowledge of the concept, aiming to reduce concept activation in that direction. Additionally, it introduces a prediction loss term based on class prototypes to incorporate global features. The paper also employs an autoencoder to minimize the discrepancy between the latent spaces of the teacher and the student. Through experiments on various biased datasets, it demonstrates the robustness of the concept-distilled student model against biases. Strengths: To the best of my knowledge, this paper is the first to propose the distillation of concept information from a well-trained teacher to a student model. The experimental results provide clear verification that the proposed approach effectively reduces the spurious correlations learned by deep neural networks (DNNs). One particularly interesting result is that even when the teacher model is biased by 100%, the student model still exhibits a reduction in bias. This finding highlights the effectiveness of the proposed methodology in mitigating bias in the student model. Weaknesses: It seems that one limitation is the use of only simple datasets in the experiments, and another is that the overall pipeline is somewhat unclear (please refer to the first question of the section below), but other than that, there don't appear to be any specific weaknesses. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1) The description of the training pipeline seems somewhat unclear. Has the student model already been trained to achieve a high classification accuracy, and is it now receiving concept knowledge through the prototype loss?
If so, wouldn't the latent space of the student model change, requiring constant updates to the module that maps the latent space between the teacher and student models? 2) Regarding the motivation and implementation of concept distillation loss, could it potentially increase the sensitivity to orthogonal concepts as an unexpected side effect? 3) In experiments where there are multiple concepts, how is the loss defined when trying to eliminate bias for all these concepts? Is it defined as an average or debiased one by one? 4) Is it possible to experiment with datasets that have a wider range of concepts? For example, using the Broden concept [1] with ImageNet? 5) Is it feasible to conduct experiments where the activation in a trained teacher model itself is controlled rather than distillation to student? 6) Can additional evaluation be provided to assess how well the autoencoder used for latent space mapping is performing? [1] Bau, David, et al. "Network dissection: Quantifying interpretability of deep visual representations." Proceedings of the IEEE conference on computer vision and pattern recognition. 2017. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: It seems that there are no particular specific limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the comments and efforts. The observation of the effectiveness of our method with a 100% biased teacher is certainly an interesting point. As also explained in our response to Reviewer RoT4y, we believe this happens as we are defining the concepts using our concept sets (different from the biased training samples) and explicitly making the gradients orthogonal to the CAV during learning. > Response to Weakness on usage of datasets: We would like to point out that the contemporary literature on debiasing (CDEP, RRR, EG, DFA, EnD, etc.) also shows results on such datasets (ColorMNIST, DecoyMNIST, BFFHQ). This is because synthetic datasets like ColorMNIST and DecoyMNIST have extreme and well-defined biases, which are relatively difficult to remove and allow proper analysis of the debiasing method. We additionally introduced a new dataset (TextureMNIST) which is comparatively more challenging and has more real-world bias (textures). Textures, being an amalgam of colors and shapes, encompass both scenarios, and CNNs often tend to have this bias, as pointed out by Geirhos et al. Furthermore, prior induction in IID using concepts is a novel task attempted in this field. This requires (de)sensitizing to ambiguous and subtle concepts like reflectance-invariance and illumination-invariance. Our method not only effectively (de)sensitizes the model towards the required concepts (Table 4, Figure 7), it does it in a very complex and large network: CGIID (L269, S:L96). Geirhos, R., Rubisch, P., Michaelis, C., Bethge, M., Wichmann, F. A., & Brendel, W. (2018). ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness. arXiv preprint arXiv:1811.12231.
We will now answer all the questions one by one below: * RQ 1: *Pipeline:* We start with a pretrained student network which is trained on the biased training set like ColorMNIST (namely, the Vanilla network) and thus performs extremely poorly on the unbiased test set with out-of-distribution samples like MNIST. E.g., the Vanilla network gets ~0% accuracy in ColorMNIST, 52.84% in DecoyMNIST, and 11.23% in TextureMNIST, as shown in Table 1. We map the pretrained teacher’s activation space to this biased student’s space ONLY for the concept set samples. The mapping step is needed only for CAV projection into the student’s space. In summary, our pipeline is:

0: Mapping the teacher space to the student space by training the autoencoder.
1: CAV learning using distilled teacher outputs to define concepts, etc.
2: Training for the main task with concept distillation. In this step the top part of the system is not used at all.
3: Testing or application, where the trained model is applied. Only the bottom half of the bottom box is used.

We will clarify the same in our paper description and algorithm. *Latent Space Changes:* Yes, the latent space indeed changes gradually with the student’s training, but as our distillation converges very fast (in less than an epoch, i.e., 200-500 iterations), we never empirically observed this drift in any of our experiments. We tried updating the CAV every few iterations (50, 100, 200, etc.), and it made no difference at all. Hence we chose a constant CAV in all our experiments. * RQ 2: During the initial design phase of our method, we chose the concept and its opposite for CAV estimation (e.g. young women vs old women). We updated this to: concept vs random set, in order to specifically avoid the concern being raised here (e.g. young women vs women and men of all ages; see L244-249). This design choice addressed the possible issue of unexpected side-effects, and we empirically confirmed the same. * RQ 3: It can be defined either way.
While we didn't explicitly experiment with the effect of multiple biases in classification problems, we did so in the case of IID. We tried two scenarios in our IID experiments: 1) Setting_1 (L265), wherein branch S was frozen while branch R was being trained for the concept of illumination invariance, and vice versa. This represents the case of sequential bias removal. 2) Setting_2 (L266), wherein both the R and S branches were trained together along with their common backbone, and the losses were simply weighted and added. For the IID problem, we did not observe significant performance differences. Exploring the same for more than two concepts and in multiple problems is one possible extension that we plan to explore in a subsequent extension of our work. * RQ 4: We have tried to demonstrate the same with the IID problem (which represents a different task than classification and is trained using a large training set of CGIID with approx. 100k+ images). Instead of opting for multiple concepts from [1] for a relatively well-defined problem in classification, we opted to focus on more ‘complex concepts’ like illumination-invariance in an ill-defined problem like IID to gauge the utility of the technique in a more real-world scenario. If necessary, we can try to get additional results using the experiment suggested. * RQ 5: We performed a similar experiment where the distillation was not used and the CAVs were learned in the same model (without prototypes) and reported the result in the ‘no distill’ column in Table 5 (L285). We apologize for the unclear explanation and will improve the writing here. For completeness, we additionally report 'no distill' accuracies (with prototypes) of our model in our Common Response above. * RQ 6: It is unclear how the mapping module can be evaluated standalone. We indirectly gauge its performance in the overall framework’s accuracy scores.
One probable way to evaluate the mapping module could be by observing its validation MSE (we used a small validation set consisting of concept set samples for early stopping the training of the mapping module autoencoder. The MSE loss reduced by >50% after a few epochs and then stabilized). We welcome relevant suggestions and experiments for better evaluation. --- Rebuttal Comment 1.1: Title: Further response to RQ1 Comment: We added the concept loss vs. step values in our response *RQ3 to Reviewer gigg*, which further substantiates our claim in *RQ1: Latent Space Changes* by Reviewer FYuP.
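A minimal sketch of the teacher-to-student mapping step discussed in this thread, under our own simplifications: the closed-form least-squares fit below stands in for the authors' small upconv/downconv autoencoder trained with an MSE loss, and all activation shapes and names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)

# Teacher and student activations for the SAME concept-set samples (toy):
# the student space is a noisy linear function of part of the teacher space.
teacher_acts = rng.normal(size=(100, 16))
student_acts = (teacher_acts[:, :8] @ rng.normal(size=(8, 8))
                + 0.1 * rng.normal(size=(100, 8)))

# Encoder E: teacher space -> student space, fit by least squares
# (a stand-in for training the mapping autoencoder on concept samples only).
E, *_ = np.linalg.lstsq(teacher_acts, student_acts, rcond=None)

# Map a teacher-space CAV into the student's activation space,
# where it can then be used for the concept loss.
cav_teacher = rng.normal(size=16)
cav_student = cav_teacher @ E

# The mapping quality can be monitored via its reconstruction MSE,
# as suggested for validation/early stopping.
mse = np.mean((teacher_acts @ E - student_acts) ** 2)
```

In this toy setup only the mapped CAV is consumed downstream; the training samples of the main task never pass through the mapping, mirroring the "ONLY for the concept set samples" point made in the rebuttal.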
Summary: The paper presents the idea of concept distillation from a pretrained teacher to improve a student. They utilize the notion of concept activation vectors (CAV) as concept representations and adapt a pretrained student to sensitize or desensitize a student w.r.t. a concept. They apply this idea for two applications to improve a model in ante-hoc fashion: (i) reducing biases in a student towards a concept, and (ii) incorporating prior knowledge. Strengths: 1. The paper is reasonably well written 2. The selection of baselines is quite extensive and the proposed method generally performs well 3. The core idea of using CAV as representations to improve a model indeed has wide applicability. This is shown quite well by authors to a good extent Weaknesses: I unfortunately have multiple methodological and experimental issues with the paper. Please find them detailed in the "Questions" section. The paper has a lot of the hallmarks to be a useful and effective work, but I simply didn't enjoy reading it as much as I should have, being constantly riddled with doubts in my mind. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Doubt #1 1. (Methodological, major) I am still not convinced by the use of the teacher for CAV estimation. The first supporting argument was in line 170-172 "Direct CAV learning ... as the model may not have sufficiently rich comprehension of the relevant concepts". But even when using teacher representations you map them to the student activation space. So if the mapping $M$ to student activation and back worked perfectly, would it defeat the point of the original argument? The only other supporting argument I could find was an experiment on ColorMNIST (Tab. 5). It was nice to see, but could it be just due to a specific student architecture? Does this observation generalize to other tasks and more complex students? 2. Can the method be used to improve biases in the original teacher given that you know it has good representation for the concepts?
I was assuming that being able to perform self-improvement without a teacher would be more useful. 3. Does the student's original activation space get disturbed due to the modifications in its updates, or are its layers before the selected intermediate layer fixed? If yes, could it harm the concept distillation loss since the student representations change? 4. (Experimental) Can the model be used to remove many biases simultaneously? This would really enhance the applicability and power of the method. 5. (Minor) Are $M$ and $M^{-1}$ mappings really exact inverses? In that case, isn't $L_{M^{-1}}$ guaranteed to be 0? If yes, then what is the point of $L_{M^{-1}}$? If not, then the notation should be modified. 6. (Minor) I am slightly disappointed with only one non-MNIST dataset for debiasing and one for the IID task. This is not the biggest concern for me since you do test your method in multiple different ways. And it certainly won't be the deciding factor in my score, but nevertheless the impact would feel greater if one of the tasks were more extensive in its coverage of complex datasets. And I assume you want debiasing as the main task. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: The authors do discuss in the main paper their limitations. They are also upfront about potential societal impact. I am pretty much satisfied with the discussion. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are glad the reviewer appreciated our core idea and its wide applicability and about it having "the hallmarks to be a useful and effective work." Our main focus is to explore the feasibility of using explainability ideas like concepts for model improvement in multiple use cases. We will improve the writing to convey this message more clearly in the revised draft and answer the specific questions below: * RQ1: In theory, it is true that if the mapping module ($M$) had a zero loss, it could make the distillation case the same as the case without distillation, but this is not observed in our experiments due to two main reasons: 1) We use $M$ to map ONLY the conceptual knowledge as a CAV and train it only for concept sets and not the training samples (L179, Algo 1 L2, Fig 3). 2) Due to major differences in the perceived notion of concepts in the teacher and student networks and due to a simple $M$ (one upconv and downconv layer; S:52), the MSE loss never goes to zero (e.g., in ColorMNIST it starts from ~11 and converges at ~5 for DiNO teacher to biased student alignment). In our initial experiments, we had tried bigger architectures (ResNet18+) for $M$ and found improved mapping losses but decreased student performance. We thank the reviewer for this question and will mention it in the paper. $M$ encodes an expert’s knowledge into the system via the provided concept sets, quantifies this knowledge as a CAV via a generalized teacher model trained on a large amount of data, and thus helps in inducing it via distillation into the student model. This brings threefold advantages in our system: the expert’s intuition, the large model’s generality, and the efficiency of distillation. The advantage of distillation in ColorMNIST is specifically highlighted in Table 5 with the 'no-distill' ablation (also shown in the additional ablations in RQ2). A similar observation can be made for BFFHQ and IID (which have different student architectures, dataset biases, and tasks).
Specifically, in the MNIST variants we used a two-layer convolution network (S:L63), in BFFHQ we experimented with ResNet18 as the student backbone (S:L65), and in IID we have a custom two-branch network (CGIID's network, L:269, S:L96). In all these cases, we observed distillation to be substantially more effective than the no-distill case. * RQ2: Yes, our method can be used to improve bias in the teacher network. We did the experiment where distillation was not used and the CAVs were learned in the same model, and reported the result in the no-distill columns of Table 5 (also L285). This experiment was done with the basic concept loss Lc and without the use of prototypes, i.e., 'no-distill minus prototypes'. We report additional 'no-distill plus prototypes' accuracies of our model below.

| Dataset | Vanilla | Without teacher | With teacher |
|---------------|---------|-----------------|--------------|
| ColorMNIST | 0.1 | 26.97 | 41.83 |
| TextureMNIST | 11.23 | 38.72 | 48.82 |
| BFFHQ | 56.87 | 59.4 | 63.00 |

The experiment shows the effectiveness of having the teacher model and highlights the advantage of the rest of the framework, which arises from the threefold advantages described above. * RQ3: Yes, the student's original space might change, but it does not impact the overall performance. To verify this claim, we did an experiment updating the CAVs in the student after every few training iterations (50, 100, 200, etc.). We observed no significant performance changes due to this. We found that this is because the student's activation space changes gradually relative to the distillation, which converges very fast (in less than an epoch, i.e., 200-500 iterations). * RQ4: Yes, this is possible. Theoretically, we see no reason our method cannot be used to remove many biases simultaneously.
In training for IID (the R-S training experiment, L265) we have two concepts, Reflectance's illumination-invariance and Shading's color-invariance, which are optimized simultaneously in the model. Exploring the same for more than two concepts and in multiple problems is one possible extension that we plan to explore in subsequent extensions of our work. * RQ5: No, they may not be exact inverses. They were meant to denote the reversed mapping spaces of the encoder and decoder only (L177). Sorry for the confusion; we will change the notation to Encoder E and Decoder D. * RQ6: Instead of opting for multiple concepts for a relatively well-defined problem in classification, we focused on more complex concepts like illumination-invariance in an ill-defined problem like IID, to gauge the utility of the technique in a more real-world scenario. We would like to point out that the MNIST datasets have been chosen for two reasons: 1) Synthetic biases in MNIST datasets are designed to be extreme and hence more challenging than a real-world scenario. Furthermore, the introduction of abstract biases in a well-understood dataset helps gauge the impact of the technique better. 2) MNIST allows comparisons with other contemporary works, which all report using the same methodology. Apart from this, we would like to point out that we have introduced a more challenging TextureMNIST dataset and also report results on BFFHQ, which has natural images. In the case of IID, the IIW benchmark is the only large-scale human-annotated dataset (the rest of the datasets, like Sintel, are synthetically rendered with known issues in IID evaluation). As pointed out by the reviewer, we focused on gathering evidence for our method's utility for multiple tasks and various types of biases rather than multiple datasets for a similar task. If needed, we can include additional results in the revision.
We once again thank the reviewer for the appreciation of the work and its utility and hope their reservations are addressed sufficiently. --- Rebuttal Comment 1.1: Comment: I want to thank the authors for the rebuttal. The results for the "no-distill" case for all datasets apart from ColorMNIST should certainly be added to the main paper/appendix to support your design choices. I still have a few remaining questions: (1) RQ1 - How would you describe how a practitioner can empirically assess if the mapping module is functioning as desired? Your response to reviewer FYuP in RQ6 initially seemed a fair enough practical choice to me, i.e., "validation mse reduces by a certain amount and the loss stabilizes". But if the validation MSE drops sharply, would you then think that $M$ might be causing a decrease in the student's performance? If yes, then is the optimal functioning of $M$ indicated by the student's performance rather than the validation MSE? (2) RQ2 - If I understand correctly, the "no-distill" results don't use a teacher and directly debias the student architecture. Did you also try to debias the teacher architectures? The reason I am making this distinction is that the teacher architectures are assumed to have richer capacity to encode conceptual knowledge. (3) RQ3 - From what I understood from Alg 1, $M$ and $v_C^l$ are learnt before the student updates and fixed throughout student training. Please correct me if I am mistaken in this understanding (it is important!). If this is indeed true, I fail to see how your experiment verified your claim in RQ3. The experiment you describe seems a more sensible way of training, compared to keeping $v_C^l$ fixed throughout student updates. What I am interested to see instead is the value of $L_C(x)$ with the new student CAV from the end of training. Is the loss gradient still orthogonal to the new CAV, while it was trained the whole time with the old CAV?
--- Reply to Comment 1.1.1: Comment: We thank the reviewer for their comments and will address them one by one below: * **RQ1**: We concur with the observation that validation MSE is not the best way of evaluating the mapping module's performance, as we wrote to FYuP: "We indirectly gauge its performance in the overall framework's accuracy scores." Additionally, the reduced student sensitivity can be gauged from our reduced concept loss (given in RQ3 below) as well as the TCAV scores [Kim et al.] reported below for the vanilla network: "Student (before training)" and "Student (after training)" (essentially "Ours" in Tables 3 and 5).

### TCAV Scores

| Dataset | Student (before training) | Student (after training) |
|------------|---------------------------|--------------------------|
| ColorMNIST | 0.52 | 0.21 |
| BFFHQ | 0.78 | 0.13 |

The evident reduction in TCAV scores after training (particularly for the concept of *color* in ColorMNIST and the concept of *age* in BFFHQ) signifies our method's efficacy in mitigating bias within the models, which is also demonstrated by the improved accuracies in the paper (Tables 3, 5). * **RQ2**: We operate under the scenario of leveraging a pre-trained teacher and assume no access to its training regime. We assume access only to the student's training dataset/losses/etc. Debiasing the teacher is out of scope as a result. Better teacher models will become available in the future, and we should be able to take advantage of them right away. We would like to clarify the term "richer" used with respect to the teacher. Teacher models like DINO are large vision models that are trained on hundreds of millions of images and have the capacity to be used in different applications and settings. That is what we meant in L:170-173 and in Supp L:30. We will clarify this point in the paper to avoid any confusion.
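The TCAV scores quoted above follow the standard definition from Kim et al.: the fraction of class samples whose directional derivative along the CAV is positive. A minimal pure-Python sketch of that statistic, with stand-in gradient vectors (in practice they come from the network's intermediate layer):

```python
# Sketch of the TCAV score from Kim et al.: the fraction of samples whose
# directional derivative (gradient dotted with the CAV) is positive.
# The gradients and CAV below are illustrative stand-ins, not the paper's data.

def tcav_score(layer_grads, cav):
    """Fraction of samples with positive sensitivity grad . cav."""
    positive = sum(
        1 for g in layer_grads
        if sum(gi * ci for gi, ci in zip(g, cav)) > 0
    )
    return positive / len(layer_grads)

# Toy example: three gradient vectors and a 2-d CAV.
grads = [[1.0, 0.0], [-1.0, 0.5], [0.2, 0.3]]
cav = [1.0, 1.0]
print(tcav_score(grads, cav))  # 2 of 3 dot products are positive -> 0.666...
```

A score near 0.5 means the model is roughly indifferent to the concept direction, which is why the drop from 0.78 to 0.13 on BFFHQ indicates reduced sensitivity to the *age* concept.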
Summary: This paper aims to sensitize or desensitize a (smaller) student model with respect to user-provided high-level concepts, by leveraging a (larger) teacher model. Specifically, a supervised mapping model learns a bijection between some chosen latent space in the teacher model and the student model. Then, CAVs extracted from the teacher model are mapped to the student model through the mapping model. Finally, the student model is trained to sensitize / desensitize with respect to the mapped CAVs by minimizing / maximizing the concept sensitivity score. Strengths: * The ideas in this paper are quite creative and have plenty of potential if explored properly. * Working with new real-world datasets (i.e. the IID experiment) in the interpretability domain is always welcomed, which makes interpretability methods more application-grounded. Weaknesses: Overall my impression is the paper explores too many ideas at once and thus unfortunately fails to cover each one in detail. Design choices are not well justified and phenomena are not well understood. * I find this paper extremely hard to read. Ideas are not elaborated clearly when first introduced and pieces of information are scattered all over the paper. It takes about 3x the time to read this draft compared to other ones. Perhaps the writing needs to be cleaned up. * Example 1: On P8 the ablation for ColorMNIST is placed after the IID experiments, which is confusing. * The proposed method is too complex and has too many claims that are not well substantiated. * Claims: * Optimizing for the concept distillation loss (Eq 2) can effectively sensitize / desensitize. * The prototype-based loss captures sensitivity better at a global level and "facilitates the use of any intermediate layer by serving as pseudo-class label" (P5L168), which frankly I have no idea what this means. * Large models learn CAVs better than smaller ones.
(the entire premise for adopting a teacher-student framework) * The compared baselines may not necessarily be fair, since the proposed method is fully supervised and requires training of entire models while the baselines are either zero-shot or few-shot. Should compare with fairness/debiasing methods that require full training of the entire model. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: * For each of the claims, there should be dedicated experiments to verify the claims, instead of bunching everything together and comparing this menagerie method with weaker 0-shot/few-shot baselines. Please justify the claims explicitly. * A proper ablation study on the design choices is a necessary component for such a complex method. How does each component affect the result? * For the concept distillation loss, the entire student model is optimized for Eq (2) but the (mapped) CAV is kept constant. If layers prior to the latent space where the CAV lies are modified, the CAV no longer represents the latent space because the input representation has changed. For example, suppose some concept happens to lie in the first dimension of the CAV latent space. If we optimize the entire model, the model could switch the concept and encode it in the second dimension instead. Thus, does it not make sense to only optimize the model after the CAV layer? * There are no descriptions of what each entry in Table 6 represents. P9L288 states to "See supplementary for detailed explanations" but I haven't been able to find them. Can the authors point me to the exact section for the ablation study? * Why exactly is $M^{-1}$ needed? * Can the authors elaborate on why concept distillation from a completely biased teacher AND where the teacher shares the same architecture as the student model might still improve? The result is quite surprising considering the benefits of distillation have disappeared in this setting.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: There is a small section in P9L299-302. The only limitation stated is "The dependence on the teacher for conceptual knowledge could be another drawback of our method as with all distillation frameworks". One limitation might be when CAV extracted from the teacher model fails to translate to the student model. When, where, and why does this happen? Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for appreciating our idea's potential and novelty, including the demonstration on a real-world dataset for a reconstruction problem. The primary goal of this work is to show that explainability ideas like concepts can be used in a loop to improve models. We believe it is an important idea with many extensions, as observed by all reviewers. We apologise if the writing is confusing and will improve it if given a chance. We now address the concerns: * W1: We placed all experiments (the evidence) first, followed by ablations at the end, as is common. We can include references earlier. * W2: We use concept distillation to desensitise a model to user-desired concepts effectively. We introduced a novel concept loss and a prototype-based extension of CAVs. The loss can be used directly in the same model (no-distill experiment in Tab 5) or using a teacher, which has richer knowledge of concepts. The prototypes transfer labels for loss computation in the chosen intermediate layers. These components, in our view, combine towards the core idea. * W2.1: The effectiveness of concept distillation on different datasets and different tasks is shown. The improvement in accuracy of models in biased settings is substantial (Table 1, column "Ours+L" vs Table 3, column "Ours"). The use of concepts leads to better generalization (Table 2, Figure 6). Finally, we demonstrated effectiveness in a challenging real-world IID problem where none of the existing interpretability-based model improvement methods could be applied (Table 4, Figure 7). * W2.2: Prototypes have been used widely in the existing literature (L127, [Xue et al., Keswani et al.]) and have been shown to capture class-wide characteristics well (L126, 127). They are a type of global method (L31 for the definition of prototypes, L64 for local vs global) by their very definition (as they are based on feature clusterings).
Original CAV-based sensitivity representations are limited to the last layer due to their loss-based definition (explained in L131-138) and hence cannot be used to estimate the sensitivity of an intermediate layer. To extend CAV sensitivity to any layer $l$, we need a way to calculate the model's sensitivity at it, which requires a loss to be calculated at layer $l$ (instead of the final layer, where GT annotations are available). We circumvent this issue by estimating prototypes of the GT annotations at any layer (by forward passing and clustering the feature vectors based on labels in the chosen intermediate layer) (L168). This is a natural extension of prototype usage in the literature. * W2.3: We experimented with various architectures for teacher selection. Results in supplementary L27, L34 show larger models perform better. Please see no-distill in Table 5 as well as additional ablations in the common rebuttal and in our response to Reviewer Gigg. * W3: *Zero-shot vs few-shot* We are sorry we failed to clearly convey the experimental settings to the reviewer. Our method is zero-shot and not supervised. Zero-shot vs few-shot is defined wrt training data (L195): zero-shot means the model has never seen the distribution of test samples (Xian et al.), while in few-shot, the model has access to some examples from the distribution of the test set (Wang et al.). Our method works with abstract concept sets (different from any test sample) and is essentially zero-shot. Our comparison with other zero-shot interpretability methods like CDEP, RRR, and EG (Table 1) is thus justified. Our method can also take advantage of few-shot examples if available, as shown in the results on BFFHQ (Table 3). In this experiment, our concept set comprises old/young men/women extracted from the test set (thus making it a few-shot method, as it has seen samples from the test distribution). We compare with the few-shot methods EnD and DFA (Table 3) (L114-L117).
This also makes our comparison fair in this category as well. *Full training of the entire model:* All of the above methods train the entire model with their introduced losses/rules/samples. CDEP, RRR, and EG add their loss terms to the models like us, while EnD and DFA train the entire model using their proposed algorithms and the OOD samples as well. * RQ1: We hope we answered all the doubts of the reviewer in our responses above. We will try to write this up better in the final version. * RQ2 and RQ4: We explored the impact of each design choice in several experiments in the ablation tables as well as in the supplementary (Table 5, Table 6, Figure 8, L278, and S:L34-45). We have provided a detailed explanation for the ablations of Table 6 in the common rebuttal above. We apologise this was not explained more clearly in the paper. * RQ3: We will change the notation as mentioned in the common rebuttal too. * RQ5: When a 100% biased teacher is used, the student gets 23% accuracy on ColorMNIST, which is still low compared to the 50.93% accuracy when a DINO ViT-B/8 is used. * RL1: Theoretically, this can happen if the student's space for the chosen bottleneck layer does not have sufficient capacity to represent the concepts (e.g., the teacher's space is huge while the student's is very small for the chosen bottleneck). But this is unlikely to occur, because the student classifier's activation space is expected to have the capacity to encode the detailed images from in-distribution and OOD samples and perform classification or the task at hand. Concepts in general are supposed to be more abstract than these detailed images and should be encodable in the student space in practice. This is what we found in our experiments. We once again thank the reviewer for the incisive comments and hope our explanation has made the reviewer positive about our work. --- Rebuttal Comment 1.1: Comment: Many thanks to the authors for the detailed reply.
I guess how easily understandable a paper is may be quite subjective, although I still believe that the entire paper could be structured better to illustrate the key points. That being said, let's get into the support for each of the claims. W2.1: I now understand the experiment setting better. Can the authors include the results for the CAV calculated only using the student model in Table 1 as well (I believe the no-distill results in the authors' rebuttal)? It is quite relevant for justifying the usage of a teacher-student framework. W2.2: 1. I don't quite get why the original CAV formulation could only be applied to the final layer. Even the original TCAV paper experimented with TCAV scores calculated at multiple intermediate layers. One simply back-propagates the input gradients to a specific layer of interest and calculates the sensitivity there instead. 2. Prototypes being a popular technique does not directly justify their usage. I don't believe this type of prototype usage has been explored before in past literature (for CAVs) and it would probably require an entire work to examine whether this design choice makes sense. The authors could also point me to the section where they explored the design choice of adding the prototype against other alternatives, as I could not find it. W2.3: Can the authors provide intuition for why a CAV learned by a teacher and then passed to the student via a learned autoencoder would be better than directly learning the CAV in the student? My understanding is that any additional "rich" information the teacher model contains must be discarded in the process of the autoencoder mapping. Therefore, any additional discriminative features used for learning the CAV in the teacher model would not be able to translate (through the autoencoder) into the student model. The information that can be translated (through the autoencoder) into the student model already exists in the student model.
Thus, there is no information advantage if we learn the CAV from the teacher, if the CAV needs to go through the (compressive) autoencoder. RQ3: The authors didn't respond to whether only finetuning the layers after the selected intermediate layer could circumvent the changing-latent-space issue (which I believe would and should be the preferred method for debiasing). RL1: One solution is to first check whether the concept is translatable from the teacher to the student; this is an important step to include in the algorithm. This is analogous to how TCAV checks the t-test statistics to see if a concept can be represented in a latent space. Confirming whether the distillation is feasible before distilling should be mandatory. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the comments. We are glad that many design options we explored while developing the idea, and some new ones, are being brought out in this discussion. This will improve the content and writing of the work. We will now discuss the comments one by one. **W2.1:** Yes, we will include the mentioned results in Table 1. We additionally show why CAV calculation in the teacher is better via an experiment in RL1 below. **W2.2.1:** The original CAV sensitivity as introduced by Kim et al. can be calculated at any intermediate layer $l$, but they only measured the sensitivity of the final-layer prediction wrt activations in $l$. They were interested only in the question: if any changes in activations are made in $l$, what is their *effect on the final-layer prediction*? This is different from the question we ask: if any changes in activations are made in $l$, what is their *effect on any other layer*? We calculate the *sensitivity of the $l$ prediction* wrt activations in any layer. To this end, we use prototypes that act as pseudo-GT labels. One further question could be: why use intermediate-layer sensitivity instead of the last layer? Ans: TCAV by Kim et al.
was designed as an interpretability method to check for model sensitivity to certain concepts in classification problems. They estimated the sensitivity using the final layer's loss/logit. We aim to finetune the model by (de)sensitizing it towards a given concept that could exist in any layer (S:22-26, [1]), for which we use a prototype-based loss ($L_p$) in that layer. **W2.2.2:** The answer is linked to our reply to *W2.2.1* above. We believe the use of prototypes with CAVs is a novel contribution (L77). As discussed above, this was necessary to compute an intermediate-layer loss for CAV sensitivity estimation at *any* layer. Defining a loss at the final layer is straightforward due to the availability of ground truth (GT), which is not available for the intermediate layers. Prototypes have been used as pseudo-GT class labels in various classification scenarios [2, 3, 4, 5, 6, 7, 8, etc. (referred earlier)]. We build on that in our experiments. In our opinion, clustering-based prototypes [2, 3, 4, 5] presented the simplest way to achieve this. In our initial experiments, we tried using dimensionality-reduction techniques, global pooling, etc., to reduce class activations to a scalar as a substitute for the CAV logit (as proposed by Kim et al.). These yielded unsatisfactory performance. The advantage of using intermediate prototypes vs the last layer's logits for concept (de)sensitization is shown in Table 6 and discussed in the common rebuttal (ablations). [1] Akula, A., Wang, S., & Zhu, S. C. (2020). Cocox: Generating conceptual and counterfactual explanations via fault-lines. AAAI [2] Caron, M., Bojanowski, P., Joulin, A., & Douze, M. (2018). Deep clustering for unsupervised learning of visual features. ECCV [3] Yang, L., Huang, B., Guo, S., Lin, Y., & Zhao, T. (2023). A Small-Sample Text Classification Model Based on Pseudo-Label Fusion Clustering Algorithm. Applied Sciences. [4] Li, J., Zhou, P., Xiong, C., & Hoi, S. C. (2020).
Prototypical contrastive learning of unsupervised representations. arXiv. [5] Niu, C., Shan, H., & Wang, G. (2022). Spice: Semantic pseudo-labeling for image clustering. IEEE Transactions on Image Processing. [6] Nassar, I., Hayat, M., Abbasnejad, E., Rezatofighi, H., & Haffari, G. (2023). PROTOCON: Pseudo-label Refinement via Online Clustering and Prototypical Consistency. CVPR. [7] Tanwisuth, K., Fan, X., Zheng, H., Zhang, S., Zhang, H., Chen, B., & Zhou, M. (2021). A prototype-oriented framework for unsupervised domain adaptation. NeurIPS. [8] Li Y, Guo L, Ge Y. Pseudo Labels for Unsupervised Domain Adaptation: A Review. Electronics. 2023. **W2.3:** This is essentially the case of ideal mapping with a loss of 0. We train the mapping module *only for concept sets* (and not the training dataset), as discussed in detail in the response RQ1 to Reviewer gigg. We also show an experiment in RL1 below (order of cosine similarity scores) which shows the same. We will mention this case in the paper/supplementary to avoid confusion and enhance comprehension. Title: Response to Reviewer oT4y
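The clustering-based prototype idea discussed in W2.2.2 above can be sketched in a few lines: each class prototype is the mean of that class's feature vectors at the chosen intermediate layer, and a prototype loss pulls a sample toward its own class mean, standing in for the missing ground truth. The feature values below are illustrative stand-ins, not the paper's activations:

```python
# Sketch of clustering-based prototypes used as pseudo-labels at an
# intermediate layer: each class prototype is the mean of that class's
# feature vectors; the prototype loss is the squared distance of a sample
# to its own class mean. All values are illustrative stand-ins.

def class_prototypes(features, labels):
    """Map each label to the mean feature vector of its samples."""
    grouped = {}
    for feat, lab in zip(features, labels):
        grouped.setdefault(lab, []).append(feat)
    return {
        lab: [sum(col) / len(feats) for col in zip(*feats)]
        for lab, feats in grouped.items()
    }

def prototype_loss(feature, label, protos):
    """Squared distance between a feature and its class prototype."""
    return sum((f - p) ** 2 for f, p in zip(feature, protos[label]))

feats = [[0.0, 0.0], [2.0, 2.0], [4.0, 4.0]]
labs = [0, 0, 1]
protos = class_prototypes(feats, labs)
print(protos[0])                              # mean of the two class-0 vectors: [1.0, 1.0]
print(prototype_loss([1.0, 1.0], 0, protos))  # sample at the prototype -> 0.0
```

In the paper's setting these prototypes would be recomputed from the student's intermediate activations (the "varying prototypes" variant of Table 6), which is why they can serve as pseudo-class labels at any layer.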
Rebuttal 1: Rebuttal: We thank the reviewers for their valuable suggestions and feedback. We are glad they acknowledged the potential of our work. Our primary goal is to show that explainability ideas like concepts can be used in a loop to improve models. This is an important idea with many future directions, as observed by the reviewers. The summary of our pipeline (Fig 3 and Alg 1): * Step 0: Mapping the teacher space to the student space by training the autoencoder. * Step 1: CAV learning using distilled teacher outputs to define concepts. * Step 2: Training the main task with concept distillation. The top part of Fig 3 is not used. * Step 3: Testing, where the trained model is applied. Only the bottom half and the classification head are used. We address common concerns here. Detailed responses are included against the individual reviews. **Mapping module Notation & Training:** * The M and M-1 notations are meant to denote the spaces of the encoder and decoder only (L177). They are not exact inverses. We will change them to Encoder E and Decoder D. * We use the mapping module ONLY for the concepts while training for the CAV. It has no role in later steps like concept distillation (L179, Algo 1 L2, Fig 3). **Ablations:** We ran several ablations to justify our design choices (Tabs 5, 6; Fig 8). We tried our concept-loss formulation in the no-distillation setting, wherein the CAVs are learned in the same student model (Table 5, no-distill). The accuracy improves from 0% for the base vanilla network trained with Lo to 9.96% when Lc (Eq 2, L147) is added during training. The sensitivity used for Lc is calculated as the gradient of the loss (Eq 2, L147) in our experiments. We also tried calculating sensitivity as the gradient of the logit (second column of Table 6, "logit") and found the former works better. We also ablate a variant where we compute our concept distillation loss using the last layer's loss directly (the way Kim et al. [28] calculated sensitivity) instead of using our proposed prototype-based loss, i.e.,
calculation of Lc using Lo instead of Lp, described in L155 (Lo column). This shows the effectiveness of using mean class representations (prototypes). We ablate using a fixed set of prototypes instead of varying them (Lp, fixed proto) and found the performance reaches 40.02%; by varying the prototypes according to L164 we achieve 41.83% accuracy, a slight improvement over the fixed-prototype case. Finally, we add a local loss (the RRR loss term) and achieve 50.93% accuracy on ColorMNIST. We also show results when i) the KNN K in prototype calculation is varied, finding k=7 to work best (Figure 8, right), and ii) the number of images (# imgs) in the concept set is varied, where we observe a peak at # imgs = 150; these values are chosen for the KNN K and # imgs in our experiments (L189, S:61, S:74). We apologize for the unclear ablations and will add these explanations to the paper to make them clear. **No Distillation:** We showed an ablation in the main paper (Tab 5, L285) where distillation was not used and the CAVs were learned in the same model (without prototypes), and reported the result in the 'no distill' column of Tab 5 (L285), which is 9.96%. We additionally report 'no distill' accuracies (with prototypes) of our model below:

| Dataset | Vanilla | Without teacher | With teacher |
|---------------|---------|-----------------|--------------|
| ColorMNIST | 0.1 | 26.97 | 41.83 |
| TextureMNIST | 11.23 | 38.72 | 48.82 |
| BFFHQ | 56.87 | 59.4 | 63.00 |

These results are much better than the vanilla network even without the teacher, and much better still with it. **Latent Space:** The student's original space might change with training updates, but this does not impact the overall performance. To verify this claim, we did experiments updating the CAVs in the student after every few training iterations (50, 100, 200, etc.). We observed no significant performance changes due to this.
This may be because the student's activation space changes gradually relative to the distillation, which converges very fast (in less than an epoch, i.e., 200-500 iterations). **Datasets:** Instead of opting for multiple concepts for a relatively well-defined problem like classification, we opted to focus on more complex concepts like illumination-invariance in an ill-defined problem like IID, to gauge the utility of the technique in a more real-world scenario. The MNIST datasets have been chosen for two reasons: 1) Synthetic biases in MNIST datasets are designed to be extreme and hence more challenging than a real-world scenario. The introduction of abstract biases in a well-understood dataset helps gauge the impact of the technique better. 2) MNIST allows comparisons with other contemporary works, which all report using the same methodology. Apart from this, we introduced a more challenging TextureMNIST dataset and also report results on BFFHQ, which has natural images. In the case of IID, the IIW benchmark is the only large-scale human-annotated dataset (the rest of the datasets, like Sintel, are synthetically rendered with known issues in IID evaluation). We will make the above clearer in the paper. Additionally, we will improve the figures and pipeline description. **Common Citations:** * M. Xue, Q. Huang, H. Zhang, L. Cheng, J. Song, M. Wu, and M. Song. Protopformer: Concentrating on prototypical parts in vision transformers for interpretable image recognition. * M. Keswani, S. Ramakrishnan, N. Reddy, and V. N. Balasubramanian. Proto2proto: Can you recognize the car, the way I do? * Xian, Y., Lampert, C. H., Schiele, B., & Akata, Z. (2018). Zero-shot learning—a comprehensive evaluation of the good, the bad and the ugly. * Wang, Y., Yao, Q., Kwok, J. T., & Ni, L. M. (2020). Generalizing from a few examples: A survey on few-shot learning. (Note) Notations for the rebuttal: S:LXX denotes supplementary Line XX, while LXX denotes main paper Line XX.
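The ablation discussion above repeatedly refers to computing sensitivity as "the gradient of the loss" along the CAV direction. A minimal sketch of that concept-sensitivity quantity, phrased as an absolute cosine similarity that desensitization would drive toward zero (making the loss gradient orthogonal to the concept direction). The vectors are illustrative stand-ins, and this is a simplified reading of the paper's Lc, not its exact implementation:

```python
import math

# Sketch of a concept-sensitivity penalty in the spirit of the concept loss:
# project the (intermediate-layer) loss gradient onto the CAV and take the
# absolute cosine similarity. Training that minimizes this pushes the loss
# gradient to be orthogonal to the concept direction (desensitization).
# All vectors are illustrative stand-ins.

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def concept_sensitivity(loss_grad, cav):
    """Absolute cosine similarity between the loss gradient and the CAV."""
    return abs(cosine(loss_grad, cav))

cav = [1.0, 0.0]
print(concept_sensitivity([0.0, 3.0], cav))  # orthogonal gradient -> 0.0
print(concept_sensitivity([2.0, 0.0], cav))  # fully aligned gradient -> 1.0
```

This also makes the reviewer's RQ3 concern concrete: if the student's features drift so that a stale CAV no longer spans the concept, this penalty can be small even though the model remains biased, which is what the CAV-refresh experiment checks.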
NeurIPS_2023_submissions_huggingface
2023
LICO: Explainable Models with Language-Image COnsistency
Accept (poster)
Summary: This paper proposed LICO, which leverages the textual and semantic knowledge learned by large language models to guide latent image features. By matching relationships among images with KL-divergence globally, and distances between specific feature maps and prompt tokens with OT, the image feature space is aligned with the prompt tokens of the LLM. Strengths: 1. It provides a novel idea that uses the semantic knowledge of an LLM to guide the feature representations of the image classification model. Since the feature-expression ability of the model is improved, both the classification performance and the interpretation maps generated by XAI methods are better. 2. Thorough experiments are conducted on multiple datasets and show the effectiveness of the proposed method. 3. The paper is clearly written and easy to follow. Weaknesses: 1. My main concern is that the model trained with the help of the LLM is not the original model, so the improved interpretation is for the later model, which has better performance. So I'm unsure whether this can be regarded as "enhancing existing visual interpretation methods", since the interpreted model has already changed. It's more like LICO enhances the classification model so that the XAI results are improved. LICO is not an XAI method itself, but it helps the model better learn latent features by introducing knowledge from the LLM. 2. Some small problems: - What's the transport cost used in OT? - Since fixed prompts are used in the original CLIP but learnable X1 to Xm-1 are added to the language-model input, can the model work well with frozen parameters and without fine-tuning? How are the learnable prompts initialized? - Missing introduction in Sec. 1 and Sec. 2 for perturbation-based explanation methods like RISE, which is also compared in the experiments. - The legends for g and f in Figure 2 are reversed.
Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: see the "Weaknesses" Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your thorough summary, encouraging feedback, and constructive suggestions. **Q1: Concerns that the model trained with LICO is not the original model** Your understanding is correct, and we completely agree with you. The proposed LICO is not strictly an XAI method but offers a more explainable classification model trained from scratch while maintaining its discriminative capacity. In other words, LICO acts as a *plug-and-play* training strategy, where the resulting models are enhanced with the semantic knowledge of an LLM through alignment of global manifolds and local features. We will make this clear in the future version. **Q2: On the transport cost used in OT** We would like to clarify that the transport cost used in OT is the cosine distance, which will be stated in the future version. **Q3: On the learnable prompt initialization and model performance with frozen parameters** Thank you for your insightful comment. The learnable prompts in LICO are randomly initialized from a Gaussian distribution, the same setting as in [1]. To evaluate model performance with frozen parameters, we conducted experiments on CIFAR-10 and ImageNet under two frozen-parameter settings: (1) frozen random initialization (Random) and (2) the fixed form 'a photo of a [CLS]' (Fixed). The Insertion and Deletion values are obtained by Grad-CAM + LICO. In **Table R1**, we can see that frozen random prompts do not enable the models to achieve higher accuracy, insertion, and deletion; the reason is that the well-trained CLIP text encoder is sensitive to human-understandable phrases and sentences, while random prompts make image-text alignment difficult and yield inaccurate semantic representations. 
However, the fixed form 'a photo of a [CLS]' performs better than frozen random parameters because this meaningful prompt is more consistent with the input of the original CLIP, so the generated representations can be easily aligned with image representations. **Table R1**: Evaluations on frozen parameters of prompts

| ImageNet | Top-1 $\uparrow$ | Insertion $\uparrow$ | Deletion $\downarrow$ |
| :---: | :---: | :---: | :---: |
| Random | 75.88 | 53.3 | 17.8 |
| Fixed | 76.20 | 55.2 | 17.4 |
| Learnable (ours) | **76.27** | **57.1** | **15.1** |

| CIFAR-10 | Full, Accuracy $\uparrow$ | 4000, Accuracy $\uparrow$ |
| :---: | :---: | :---: |
| Random | 94.9 | 79.5 |
| Fixed | 95.4 | 80.7 |
| Learnable (ours) | **95.8** | **81.5** |

> [1] Guangyi Chen, Weiran Yao, Xiangchen Song, Xinyue Li, Yongming Rao, and Kun Zhang. PLOT: Prompt learning with optimal transport for vision-language models. In The Eleventh International Conference on Learning Representations, 2023.

**Q4: On the missing introduction of perturbation-based methods and the reversed legend** Thank you for your careful reading and for pointing out these issues. We will add an introduction of RISE in Secs. 1 and 2 and fix the legend error in Figure 2 in the future version.
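The cosine transport cost mentioned in Q2 above can be illustrated with a small standalone sketch. This is not the authors' code: the entropic-regularized Sinkhorn solver, the uniform marginals, and all shapes (49 feature-map vectors, 16 prompt tokens, 512-D embeddings) are illustrative assumptions.

```python
import numpy as np

def cosine_cost(feats, tokens):
    """Cost matrix C[i, j] = 1 - cosine similarity between
    feature-map vector i and prompt-token embedding j."""
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    t = tokens / np.linalg.norm(tokens, axis=1, keepdims=True)
    return 1.0 - f @ t.T

def sinkhorn(C, eps=0.1, n_iters=500):
    """Entropic-regularized OT with uniform marginals; returns a
    non-negative transport plan T of shape (N, M) whose rows and
    columns sum to the chosen marginals."""
    N, M = C.shape
    a, b = np.full(N, 1.0 / N), np.full(M, 1.0 / M)
    K = np.exp(-C / eps)
    u = np.ones(N)
    for _ in range(n_iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]

rng = np.random.default_rng(0)
feats = rng.normal(size=(49, 512))   # N = 49 feature-map vectors (e.g. a 7x7 grid)
tokens = rng.normal(size=(16, 512))  # M = 16 prompt-token embeddings
T = sinkhorn(cosine_cost(feats, tokens))
```

Each column of `T` can then be read as a weighting over the N feature maps for one prompt token, in the spirit of the plan the rebuttal describes.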
Summary: This paper introduces LICO, a model that aligns a visual encoder with language features. The model incorporates a frozen text encoder, a trainable image encoder, a classification loss, a manifold matching loss, and an optimal transport loss. Experiments show improvements over existing interpretation methods. Strengths: 1. This paper is well motivated, and the method is clearly stated in text as well as figures. 2. Quantitative results are better than baselines. 3. The experiments are comprehensive in terms of the number of datasets and baselines compared. Weaknesses: 1. LICO obtains class-label text embeddings via a text encoder pretrained with internet-scale image-text pairs, thus bringing in additional information. Further, the image encoder is trainable, whereas in baseline methods the image encoders are all frozen. These two aspects add up to an unfair comparison with other baselines. 2. The optimal transport loss is motivated by aligning partial regions to prompt tokens, but there are no quantitative/qualitative experiments analyzing that specific effect. 3. The architectures used in this work are generally outdated, and the exact variants used (ResNet18/50) are among the less performant ones. 4. In Figure 3, the qualitative results from LICO are not better than baselines. 5. In Table 2, the improvements are not significant. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. How would the authors compare the manifold loss to a cross-entropy with softmax temperature > 1? 2. What is the necessity of the text encoder being from CLIP? Would a pure text embedding model (word2vec, BERT, etc.) work? 3. Would a model trained with LICO work for words that are not included in the training class label set? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors discuss their limitation on L303, which is not a true limitation compared to some in the Weaknesses I listed. Further, this limitation of "some training overhead" is not discussed in the paper. In concept, such an MLP would only bring insignificant training cost when compared to the image encoder. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for appreciating the motivation and comprehensive experiments. **Q1: On comparison fairness** We apologize for not making clear that our comparison to the baselines is fair. - **Additional textual information**. Treating the textual information of an LLM as a real-world semantic space is our assumption, motivating us to enhance target models. For downstream tasks, previous interpretation methods did NOT consider this critical point, so their results are biased toward a limited semantic space. - **Learned encoders of LICO and baseline methods**. LICO and baseline encoders are trained from *scratch* on the same dataset with the same CE loss. Even for Grad-CAM, the target encoders are trained similarly. **Q2: Analysis of aligning partial regions to prompt tokens** This question highlights an essential aspect of our research. To the best of our knowledge, LICO is the first work to align specific tokens with correlated feature maps for better interpretation. The optimal transport plan $T^\* \in \mathbb{R}^{N \times M}$ of OT implies the relationship between $M$ prompt tokens and $N$ feature maps. The corresponding row of $T^\*$ can be treated as the weights of the corresponding feature maps. However, whether the weighted sum of feature maps can represent the heatmap of a specific token needs further exploration, which we will study in the future. Notably, our ablation study in Table 5 has verified the effectiveness of OT, and MF + OT outperforms MF alone both in classification performance and quantitative interpretation. **Q3: Concerns about the simple architectures** First, we followed prior interpretation methods in using these architectures for a fair comparison. Second, LICO's design is model-agnostic and mainly focused on language-consistent interpretation, so it can be flexibly extended to advanced architectures. 
**Q4: Results in Figure 3 and Table 2** In Figure 3, although LICO and Original cover similar rough regions, LICO helps to localize more accurate details. In addition, the corresponding insertion and deletion curves quantitatively illustrate that the highlighted regions of LICO are more sensitive to model decisions. In Table 2, unlike previous methods that compromise model discriminative performance, LICO is the first method that *simultaneously* improves interpretation and classification. **Q5: Manifold loss vs cross-entropy with Softmax temperature > 1** This question is interesting and instructive. First, mathematically, $L_{\text{KL}}(A^{G}||A^{F}) = L_{\text{CE}}(A^{G}, A^{F}) - H(A^{G})$, where $H(\cdot)$ denotes entropy. Since the entropy $H(A^{G})$ is constant with respect to $A^{F}$, minimizing the KL-divergence is equivalent to minimizing the CE loss. When the temperature > 1, the entropy of $A^{G}$ becomes larger, resulting in an over-flattened, inaccurate prompt manifold. If the entropy $H(A^{G})$ is close to zero, $A^{G}$ is nearly one-hot, and the KL-divergence reduces to the standard cross-entropy. Hence, MF and CE are similar to some extent. Second, the MF of LICO is effective in cross-modal alignment, whereas CE usually works with categorical one-hot labels and is inferior in matching two distributions. **Q6: CLIP text encoder vs pure text embedding model** Both Word2Vec (W2V) and BERT can replace the CLIP text encoder in our LICO framework. However, they may be inferior as language guidance in LICO: (1) The CLIP text encoder was trained with huge amounts of image-text pairs, and the advanced ViT-B/32 architecture has stronger representation ability than W2V; (2) even though BERT excels at representing language data, its latent representations may not align well with target image representations; (3) recent studies on prompt learning such as CoCoOp have demonstrated the effectiveness of CLIP pre-trained text encoders for downstream tasks. 
In **Table R1**, we conducted experiments using W2V as the text encoder, mapping texts into 512-D. W2V: fixed word embeddings and no context tokens for the OT loss. W2V-P: W2V embeddings replace the class tokens in the prompts, with the context tokens randomly initialized from a Gaussian. We can see that W2V performed comparably to the None-encoder baseline. W2V-P also fails, since the fixed random context tokens mislead the image features. In contrast, CLIP outperforms both: the fixed W2V embedding vectors struggle to correlate well with target image features. **Table R1**: Comparison between CLIP text encoder and W2V on CIFAR-10 | Encoder | CLIP | W2V | W2V-P | None | | :---: | :---: | :---: | :---: | :---: | | Full | **95.8** | 95.6 | 94.9 | 95.6 | | 4000 | **81.5** | 81.0 | 80.2 | 80.9 | **Q7: Would LICO work for words not included in the training?** This comment motivates us to consider the robustness of LICO. We conducted additional zero-shot inference experiments: training on ImageNet and inference on the CIFAR-10, Flowers, and FGVC Aircraft test sets. The inference probability is computed the same way as the image-text matching used in CLIP. At the inference stage, we replace the class tokens of ImageNet with those of the test sets, and the trained context tokens are randomly selected for constructing new prompts. In **Table R2**, we report the mean accuracy of five independent tests. Compared with CLIP trained on vast amounts of image-text pairs, our LICO shows some performance drop on CIFAR-10 and Flowers and a relatively small drop on FGVC Aircraft, because the CLIP training set also does not contain large amounts of aircraft images and texts. We also fixed the pre-trained encoder and fine-tuned a classification head (LICO-F), which improves the zero-shot performance. In summary, our LICO works in this setting and achieves comparable results. 
**Table R2**: Comparisons of zero-shot classification accuracy (%) | Method | CIFAR-10 | Flowers | FGVC Aircraft | | :---: | :---: | :---: | :---: | | CLIP | **75.6** | **65.9** | **19.3** | | LICO | 63.8 | 55.7 | 17.2 | | LICO-F | 70.4 | 61.1 | 18.7 | **Q8: On the limitation** Thanks. We will discuss the weaknesses you listed in the future version. --- Rebuttal Comment 1.1: Comment: I thank the authors for providing this informative rebuttal. Some of the responses have addressed my concerns, while there are a few for which I still have further questions: - On the architecture. I agree with the statement that "LICO's design is model-agnostic" in the sense of computation; however, when it comes to actual results, ResNet and ViT function differently, and such differences are sometimes reflected in very different attention maps. One example is the 2nd row, Figure 4 in [1]. - On pure text embedding models. Word2Vec is a relatively old method because 1) it was not trained on a large-scale dataset by today's standard, and 2) the model may not have sufficient capacity. In some ways, Table R1 in the rebuttal can be explained as the deficiency of W2V rather than text-only vs. image-text alignment. It would be more interesting to learn about the performance when the text embedding comes from a stronger model, i.e., one comparable to CLIP in terms of model capacity and training data scale. A thorough discussion regarding this point would help improve the quality of this work. [1] https://arxiv.org/pdf/2207.09684.pdf --- Reply to Comment 1.1.1: Title: Concerns about Different Architectures and Pure Text Embedding Models Comment: We appreciate your further feedback. We’d like to address your further questions point-by-point. 
**On the architecture.** (i) We agree that CNNs and Transformers differ in how attention maps are visualized, but they are similar in how they incorporate LICO, since LICO depends only on the latent representations, i.e., the representations before the final classification head. (ii) LICO does not affect the calculation of attention maps (the last self-attention) in ViTs. For ViT image encoders, LICO can also guide the representations of class tokens through the proposed $L_{\text{OT}}$ and $L_{\text{MF}}$. (iii) In this paper, we follow previous interpretation studies, which focused on interpretation methods based on simple CNN backbones. **LICO can effectively overcome their common difficulty of improving interpretability and classification performance simultaneously**. Hence, we will treat incorporating LICO with ViTs as future work, since more effort is needed to investigate how to obtain more explainable self-attention in ViTs, suitable quantitative metrics for interpreting ViTs, whether there exists a trade-off between interpretability and classification performance, etc. (iv) Thanks for recommending publication [1], a wonderful work that utilizes partial distance correlation (DC) to measure the similarity of different networks. DC is helpful for generating improved attention maps by conditioning on another network, benefiting from its appealing properties of end-to-end optimization and of measuring feature spaces of different dimensions. This work inspires us to consider the relationships among features of different models and to further facilitate interpretation studies. - [1] On the Versatile Uses of Partial Distance Correlation in Deep Learning, ECCV 2022. 
**On pure text embedding models.** In addition to Word2Vec (W2V), we further applied pre-trained BERT to verify the effectiveness of LICO: (i) the pure language BERT [1]; (ii) the text encoder of a vision-language BERT, i.e., BERT-ALIGN. ALIGN [2] is also a framework that aligns image and language features in a latent space, differing from CLIP in that it is trained on a noisy image-text dataset larger than CLIP's. For both BERT [1] and BERT-ALIGN [2], we take learnable class-specific prompts as the inputs of the text encoders. Extending Table R1, we provide Table R3 below. We can see that BERT [1] surpasses W2V and W2V-P due to the generalizability of a stronger pre-trained model. However, BERT still cannot achieve better performance than CLIP. Furthermore, BERT-ALIGN performs competitively and even better than CLIP, which we attribute to its larger training set with more noisy image-text pairs. Consequently, from the results in Table R3 and in this paper, we conclude that LICO works better with vision-language pre-trained text encoders. Pure text encoders like W2V, and even pre-trained BERT with frozen parameters, are inferior for image-text alignment in LICO because the pre-trained parameters are not sensitive to visual features. This deficiency may be addressed by utilizing transfer learning and domain adaptation techniques. **Table R3**: Comparison of different text encoders on CIFAR-10 | Encoder | CLIP | W2V | W2V-P | BERT[1] | BERT-ALIGN[2] | None | | :--- | :---: | :---: | :---: | :---: | :---: | :---: | | Full | $\textbf{95.8}$ | 95.6 | 94.9 | 95.7 | $\textbf{95.8}$ | 95.6 | | 4000 | 81.5 | 81.0 | 80.2 | 81.3 | $\textbf{81.7}$ | 80.9 | - [1] BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, ACL 2019. - [2] Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision, ICML 2021. 
Based on your valuable comments and suggestions, in our future work we will comprehensively discuss different architectures and pre-trained models and try to unify the interpretation of CNNs and ViTs. --- Reply to Comment 1.1.2: Title: Experimental results with a Transformer-based model Comment: To address the major concern on the backbone, we further trained a ViT-Base-16 network on the ImageNet-1k dataset and provide accuracy, insertion, and deletion in **Table R4**. We calculated $L_{\text{MF}}$ and $L_{\text{OT}}$ between the language tokens and the representations of the patch tokens and class token. For attention maps, we applied Grad-CAM to the LICO-trained ViT-Base-16 by computing gradients from the outputs to the last attention layer of the class token. **Table R4** further confirms that the Transformer model with LICO not only **performs better** but also **gains better interpretability** than the one without LICO, in line with the findings for CNN-based backbones. In future work, we will further develop more explainable decision clues for ViTs by incorporating knowledge of LLMs into self-attention. **Table R4**: Evaluation on ImageNet-1k using ViT-Base-16 | ViT-Base-16 | Accuracy$\uparrow$ | Insertion$\uparrow$ | Deletion$\downarrow$ | | --- | --- | --- | --- | | w/o LICO | 77.9 | 55.2 | 14.4 | | w/ LICO | **78.2** | **56.0** | **13.8** | --- Reply to Comment 1.1.3: Title: We are looking forward to your feedback Comment: Dear reviewer V7Pm, Thanks again for all of your constructive suggestions, which have helped us improve the quality and clarity of the paper! Since the author-reviewer discussion period will end in a few hours, we would appreciate it if you could take the time to read our further response and give us some feedback. Regarding your two major concerns, **we have demonstrated the effectiveness of ViT and BERT in the LICO framework**, and most importantly, **ViT with LICO also obtains improved classification performance and interpretability**. 
Please don't hesitate to let us know if there are any additional clarifications or experiments that we can offer. If our response resolves your concerns, we kindly ask you to consider raising the rating of our work. Thanks for your time and efforts! Best, Authors of Paper 6636
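The KL/cross-entropy relation raised in Q5 of the rebuttal above can be checked numerically. The sketch below is standalone and uses made-up similarity logits; treating $A^G$ and $A^F$ as row-wise softmax distributions over pairwise similarities is our assumption about the manifold-matching setup, not the paper's code. It verifies $\mathrm{KL}(A^G\,\|\,A^F) = \mathrm{CE}(A^G, A^F) - H(A^G)$ and that a temperature above 1 raises the entropy of the target distribution.

```python
import numpy as np

def softmax(x, temp=1.0):
    """Row-wise softmax with an optional temperature (softmax of x / temp)."""
    z = np.exp((x - x.max(axis=-1, keepdims=True)) / temp)
    return z / z.sum(axis=-1, keepdims=True)

def kl(p, q):
    return np.sum(p * np.log(p / q), axis=-1)

def cross_entropy(p, q):
    return -np.sum(p * np.log(q), axis=-1)

def entropy(p):
    return -np.sum(p * np.log(p), axis=-1)

rng = np.random.default_rng(0)
sim_text = rng.normal(size=(8, 8))  # hypothetical pairwise similarity logits (prompts)
sim_img = rng.normal(size=(8, 8))   # hypothetical pairwise similarity logits (images)
A_G = softmax(sim_text)             # target (prompt) manifold distribution
A_F = softmax(sim_img)              # image manifold distribution

# KL(A_G || A_F) = CE(A_G, A_F) - H(A_G): since H(A_G) does not depend on
# A_F, minimising the KL term over A_F is the same as minimising the CE.
assert np.allclose(kl(A_G, A_F), cross_entropy(A_G, A_F) - entropy(A_G))

# Temperature > 1 flattens A_G, so its entropy grows (the Q5 argument).
assert np.all(entropy(softmax(sim_text, temp=2.0)) > entropy(A_G))
```

This makes the equivalence concrete: the entropy term is a constant offset during optimization, so the distinction between the MF loss and a cross-entropy lies in the soft targets, not in the divergence itself.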
Summary: Most visual interpretation methods based on saliency information often generate inaccurate saliency maps due to the limited discriminative information provided by one-hot labels. The manuscript proposes a language-image consistency model (LICO) to address this challenge. LICO utilizes the large-scale vision-language model CLIP to improve existing CAM-based visual explanation methods such as Grad-CAM. The authors assume that the numerous image-text pairs used in CLIP encode rich semantic information that can be aligned with the latent space of the image domain, which establishes global manifold structure alignment and aligns local feature maps with class-specific prompts to generate more accurate saliency maps. The paper is well-organized and well-motivated, and the idea of leveraging language information from large VLMs is intuitive and effective. Experiments including deletion and insertion tests, sanity checks, and classification demonstrate that LICO achieves improvements over existing interpretation methods, resulting in more explainable attention maps. Strengths: (1) The paper is well-organized and easy to understand. (2) The motivation behind leveraging language information from large VLMs is clear and effective. (3) The use of prompt information and multi-modal techniques in explainable methods may be of good value. Weaknesses: (1) The effectiveness of LICO requires further verification. The sensitivity of LICO's saliency to model parameters raises concerns about its locality and uncertainty. Evaluations on more complex images and assessments of the model's robustness would be beneficial. (2) The results of the saliency maps are not sufficient to support the conclusion of better coverage of comprehensive and discriminative regions. Comparisons with more complex multi-target and multi-class images are preferable. (3) The details of the ablation studies are not clearly presented. 
The saliency maps generated by CAM-based methods + LICO may be insufficient to qualitatively show the salient regions that the classification model focuses on. It would be better to additionally provide results for "multi-object single-class" and "multi-class" cases, in addition to "single-object". Moreover, it would be better to conduct additional experiments evaluating the segmentation and localization performance of LICO for a more comprehensive comparison with other explanation methods. (4) Some basic experiment settings, such as the datasets and base backbones, should be explained clearly. Technical Quality: 3 good Clarity: 3 good Questions for Authors: (1) The total loss function in Algorithm 1 is inconsistent with that in Eq. (8). The loss ‘LKL’ in Algorithm 1 may need to be changed to ‘LMF‘. (2) Table 2 shows a decrease in classification accuracy for the base ResNet model when combined with GCC and CGC. The cited previous work suggests that good explanatory methods often sacrifice discriminative ability. It would be helpful to explain how the proposed method exhibits both good explanatory and discriminative abilities. (3) In Table 5, experimental comparisons based on Lce should also be carried out in order to better investigate the effectiveness of ‘LMF’ and ‘LOT’. (4) What is the effect of the MLP used in the text encoder? Since the predicted probability relies only on the trained image encoder and classifier during inference, is there another, more efficient way of processing the language features? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for appreciating the motivation and the good value of this work. **Q1: Complex image evaluation & model robustness** Great feedback. In addition to Fig. 3, we have provided more results in Figs. 1 and 2 of the Supplementary Material (SM). For example, in Fig. 1(c) of the SM, LICO captures more details, such as the feet and head, and in Fig. 2 of the SM, the attention maps of LICO are more explainable for human vision. For robustness assessment, we evaluated the standard deviations of the Pointing Game results (**Table R1**) for different methods. LICO exhibits better results than baseline methods, indicating that LICO is more stable in localization. We will add more attention maps of complex images and more assessments to the future version. **Q2: Experiments under the 'multi-object single-class' and 'multi-class' settings** We appreciate your insightful comments and suggestions. Regarding the **multi-object single-class** cases, we would like to draw your attention to Section 1 of the Supplementary Material for the results. Furthermore, Fig. 2 also demonstrates LICO's capability to capture multiple entities of identical classes through its attention maps. For the **multi-class** setting, we conducted the pointing game on the MS COCO 2017 validation set for localization evaluation. Following the settings of Score-CAM and Group-CAM, we quantified localization by calculating $\frac{\text{Hits}}{\text{Hits} + \text{Misses}}$, assessing whether the most salient pixels fall within the annotated bounding boxes. **Table R1** shows that LICO consistently improves all the baseline interpretation methods, indicating the effectiveness of the regularization by the proposed manifold and OT losses. We will certainly add these results in our future version. In the **Global Response PDF** file, we provide more attention maps under ‘multi-object single-class’ and ‘multi-class’ conditions. 
**Table R1**: Mean accuracy values $\pm$ standard deviation of the Pointing Game on the MS COCO 2017 val dataset | Methods | Grad-CAM | Grad-CAM++ | RISE | XRAI | Score-CAM | Group-CAM | | :---: | :---: | :---: | :---: | :---: | :---: | :---: | | w/o LICO | 56.7$\pm$0.225 | 57.2$\pm$0.227 | 54.3$\pm$0.205 | 55.1$\pm$0.232 | 51.0$\pm$0.211 | 57.5$\pm$0.202 | | w/ LICO | **56.9$\pm$0.221** | **58.1$\pm$0.215** | **55.2$\pm$0.201** | **56.7$\pm$0.229** | **52.5$\pm$0.205** | **58.2$\pm$0.197** | **Q3: More details and basic experimental settings** We will add more details about the ablation studies and experimental settings to the future version. **Q4: Inconsistent total loss function between Algorithm 1 and Eq. (8)** Thank you for pointing out this issue. The loss ‘LKL’ in Algorithm 1 should have been ‘LMF‘, which will be fixed in the future version. **Q5: Explanation of how LICO exhibits both good explanatory and discriminative abilities** Very interesting observation. Based on our assumption that the CLIP text encoder encodes a 'real-world' semantic space, its text embedding capability is closely associated with the corresponding image space. Hence, $L_{\text{MF}}$ makes the models sensitive to 'real-world' semantic information, guaranteeing discriminative ability across different classes. Once the image representation is globally aligned with the real-world semantic representation, $L_{\text{OT}}$ facilitates the correlation between visual feature maps and class-specific prompt tokens. For distribution alignment by OT within one image, $L_{\text{OT}}$ routes relatively redundant visual features to the learnable context tokens, highlighting the association between key features and class tokens. One critical reason for the decreased classification performance of GCC and CGC is that only categorical one-hot labels are available during training. In contrast, our LICO incorporates generalized semantic information to guide the image classification network. 
On the other hand, CGC is tailored for optimizing attention maps through contrastive learning but neglects to preserve the discriminative ability of feature representations. **Q6: Experimental results with $L_{\text{CE}}$** First, we would like to clarify that the ablation studies in Table 5 already combine $L_{\text{CE}}$ with the manifold and OT losses. In **Table R2**, we add the results obtained by $L_{\text{CE}}$ alone. $L_{\text{MF}}$ guarantees a global manifold correlated with class texts, while using only $L_{\text{OT}}$ rarely enables local alignment between feature maps and prompt tokens, as it ignores class information. We observed that $L_{\text{OT}}$ alone decreases the performance of $L_{\text{CE}}$. This suggests that without global manifold alignment, OT struggles to enhance the discriminative ability for target classes. The reason is that OT focuses on aligning context tokens and feature maps within an individual image; without correlation to a class-centric manifold, OT misaligns feature maps, resulting in suboptimal performance. **Table R2**: Ablation on $L_{\text{MF}}$ and $L_{\text{OT}}$ | Loss | Top-1 $\uparrow$ | Top-5 $\uparrow$ | Insertion $\uparrow$ | Deletion $\downarrow$ | | :--- | :---: | :---: | :---: | :---: | | $L_{\text{CE}}$ | 76.13 | 92.91 | 53.5 | **13.3** | | $L_{\text{CE}}$ + $L_{\text{OT}}$ | 75.98 | 92.92 | 56.6 | 16.0 | | $L_{\text{CE}}$ + $L_{\text{MF}}$ | 76.18 | 92.90 | 56.9 | 15.5 | | $L_{\text{CE}}$ + $L_{\text{OT}}$ + $L_{\text{MF}}$ | **76.27** | **92.99** | **57.1** | 15.1 | **Q7: Effect of the MLP used in the text encoder** The MLP ensures that the text representations from CLIP can be dimensionally aligned with the latent visual features of the various image encoders in our experiments. In practical scenarios, it is possible to find an alternative way to process the text representation, given a fixed dimension. However, we emphasize that such an MLP only brings minor training costs and does not compromise inference efficiency. 
--- Rebuttal Comment 1.1: Comment: Thanks for the authors' effort in providing the rebuttal, which clarified several of my concerns. It can be claimed, to some extent, that this submission introduces a novel concept of semantic information by effectively aligning the prompt tokens with the feature maps. However, one aspect still requires further clarification. It remains somewhat unclear whether the proposed loss function genuinely contributes to the enhanced alignment. Although the authors added more ablation experiments on the loss function, the experimental outcomes, notably the accuracy results, indicate that the impact of LOT and LMF on performance improvements has not been consistently effective. Therefore, it may be more suitable to keep the current rating. --- Reply to Comment 1.1.1: Title: Effectiveness of proposed loss functions Comment: We appreciate your further feedback. We would like to address your concern about the effectiveness of the proposed losses. First, we'd like to clarify again that LICO aims to improve the interpretability of CNNs while maintaining or enhancing classification performance. This is essentially challenging, as discussed in previous studies such as Grad-CAM, Score-CAM, GCC, and CGC: although they improve qualitative (attention maps) and quantitative (Insertion and Deletion) results, they sacrifice classification accuracy. As shown in Tabs. 2, 3, and 4, LICO is able to **consistently** improve classification performance even under limited-data settings. Most importantly, in Fig. 1 and the attention maps provided in the paper and supplementary material, LICO exhibits superior interpretation ability against baseline methods. Second, in Tab. R2, LCE + LMF + LOT consistently improves model performance. For the results in Tab. 1, LICO obtains better insertion and deletion values than baselines, and in Tab. 2, LICO outperforms CGC and GCC in classification accuracy. In the following Tab. 
R3, LCE + LOT and LCE + LMF also outperform CGC and GCC in accuracy. Hence, LOT and LMF are effective in improving qualitative and quantitative results compared with baseline methods. **Table R3**: Ablation on $L_{\text{MF}}$ and $L_{\text{OT}}$ | Loss | Top-1 | Top-5 | Insert. | Delet. | | :--- | :---: | :---: | :---: | :---: | | $L_{\text{CE}}$ | 76.13 | 92.91 | 53.5 | **13.3** | | CGC | 74.60 | 92.24 | 52.2 | - | | GCC | 74.40 | 92.12 | - | - | | $L_{\text{CE}}+L_{\text{OT}}$ | 75.98 | 92.92 | 56.6 | 16.0 | | $L_{\text{CE}}+L_{\text{MF}}$ | 76.18 | 92.90 | 56.9 | 15.5 | | $L_{\text{CE}}+L_{\text{OT}}+L_{\text{MF}}$ | **76.27** | **92.99** | **57.1** | 15.1 | Lastly, we highlight the significance of LICO in **harmoniously bridging the gap between better interpretability and competitive classification performance**. Particularly in real applications like medical imaging and autonomous driving, LICO pioneers an effective way of making AI explainable by applying the knowledge of LLMs. --- Reply to Comment 1.1.2: Title: We are looking forward to your feedback Comment: Dear reviewer rFY8, Thanks again for all of your constructive suggestions. We hope our previous response addresses your concerns. Since the author-reviewer discussion period will end in a few hours, we would appreciate it if you could take the time to read our further response and give us some feedback. If our response resolves your concerns, we wonder if you would consider re-evaluating the rating. Thanks for your time and efforts! Best, Authors of Paper 6636
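The pointing-game protocol used for Table R1 in the rebuttal above scores $\frac{\text{Hits}}{\text{Hits}+\text{Misses}}$ over images. A minimal sketch of one plausible implementation follows (toy data; the box format and helper names are our assumptions, not the paper's evaluation code):

```python
import numpy as np

def pointing_game_hit(saliency, boxes):
    """One pointing-game trial: a hit if the most salient pixel falls
    inside any annotated bounding box (x0, y0, x1, y1), else a miss."""
    y, x = np.unravel_index(np.argmax(saliency), saliency.shape)
    return any(x0 <= x <= x1 and y0 <= y <= y1 for x0, y0, x1, y1 in boxes)

def pointing_game_accuracy(trials):
    """Accuracy = Hits / (Hits + Misses) over (saliency, boxes) pairs."""
    hits = sum(pointing_game_hit(s, b) for s, b in trials)
    return hits / len(trials)

# Toy check with two 10x10 saliency maps (illustrative data only).
hit_map = np.zeros((10, 10)); hit_map[4, 5] = 1.0    # peak inside the box
miss_map = np.zeros((10, 10)); miss_map[9, 9] = 1.0  # peak outside the box
box = [(3, 3, 7, 7)]
print(pointing_game_accuracy([(hit_map, box), (miss_map, box)]))  # prints 0.5
```

The standard deviations in Table R1 would then come from repeating this accuracy over resampled image subsets or runs.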
Summary: This paper introduces Language-Image-COnsistent (LICO) training to obtain better interpretations for classification using a vision-language model. The proposed framework uses a frozen text encoder and a trainable image encoder to encode text and image information. The text is composed of several trainable prompt tokens and the text label for image classes. Then, a manifold matching (MF) loss is used to align the image feature latent space with the text feature latent space. In addition, an Optimal Transport (OT) loss is used to build a fine-grained correlation between prompt tokens and image features. Strengths: 1. The text encoder added during training can introduce new information from the text encoder to the image encoder, but won't influence the inference procedure. Therefore, inference remains the same as for conventional classification models. 2. The insertion and deletion tests are used to validate the model interpretations generated by the proposed method, and LICO outperforms previous interpretation methods such as GradCAM and RISE in most cases. 3. The authors conduct experiments on several image classification benchmarks. For ImageNet, LICO obtains higher accuracy than the baselines. For other benchmarks such as CIFAR and SVHN, LICO achieves better performance than the baselines using limited amounts of labels. For fine-grained benchmarks such as Aircraft, LICO also shows better performance under full/few-shot settings in most cases. Weaknesses: 1. Did you compare the training time with the baselines? The inference time will be similar, but the training process involves an extra text encoder. Therefore, it may take a longer time and more space to train the model. 2. In Table 6, do you have the result for 0 learnable context tokens? Technical Quality: 3 good Clarity: 3 good Questions for Authors: see weaknesses Confidence: 3: You are fairly confident in your assessment.
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
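For readers unfamiliar with the insertion/deletion evaluation mentioned in the review (introduced with RISE), the insertion score can be sketched as below. This is an illustrative simplification with a toy stand-in model of our own; real evaluations use a trained classifier and typically a blurred (rather than zeroed) baseline image:

```python
import numpy as np

def insertion_score(image, saliency, model, n_steps=20):
    """Insertion metric sketch: reveal pixels from most to least salient,
    tracking the model's confidence for the target class. A higher average
    confidence means the saliency map better localizes the evidence.
    (Simplified: zero baseline, mean of the curve as an AUC proxy.)
    """
    order = np.argsort(saliency.ravel())[::-1]        # most salient first
    canvas = np.zeros_like(image)
    scores = [model(canvas)]
    per_step = max(1, order.size // n_steps)
    for k in range(0, order.size, per_step):
        idx = order[k:k + per_step]
        canvas.flat[idx] = image.flat[idx]            # reveal next chunk
        scores.append(model(canvas))
    return float(np.mean(scores))

# toy "model": confidence = fraction of a bright 4x4 patch revealed
img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0
toy_model = lambda x: float(x[2:6, 2:6].mean())
good = insertion_score(img, img.copy(), toy_model)    # saliency on patch
bad = insertion_score(img, 1.0 - img, toy_model)      # saliency inverted
```

A saliency map that highlights the true evidence reveals it early, so `good` exceeds `bad`; the deletion metric is the mirror image, removing salient pixels first and preferring a fast confidence drop.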
Rebuttal 1: Rebuttal: Thank you for your insightful and valuable feedback, which pushed us to run more comprehensive experiments. **Q1: Training time comparison** Thank you for pointing out this issue. We completely agree that the training time should be compared and discussed. Per your suggestion, we report the training time of the models with and without LICO on different datasets in **Table R1**. We can see that LICO requires more training time due to the additional forward pass of the text encoder and MLP. We argue that, given the model's better interpretability and classification performance, the additional training cost is acceptable, as model training can be done offline. We will certainly add this discussion to the future version and investigate a more efficient training strategy to improve the training efficiency of LICO in the future.

**Table R1**: Comparison of training time (sec. per epoch) with and without LICO

| ResNet-50 | ImageNet | Aircraft | Flowers |
| :---: | :---: | :---: | :---: |
| w/o LICO | 1200 | 60 | 84 |
| w/ LICO | 1850 | 86 | 123 |

| WRN | CIFAR-10 | CIFAR-100 |
| :---: | :---: | :---: |
| w/o LICO | 80 | 160 |
| w/ LICO | 146 | 371 |

**Q2: About 0 learnable context tokens in Table 6** We appreciate your insightful comments. We first would like to note that '0 learnable context tokens' indicates the setting of **no learnable parameters in prompts**, i.e., only the single class tokens are used. We provide the performance of the model learned with 0 context tokens in **Table R2**; it is relatively worse than the others. This is partly due to the following facts: (1) the original text encoder of CLIP was trained with prompt engineering rather than single class tokens, and (2) CLIP has validated the superiority of learning with prompt engineering over single class tokens.
Our results are consistent with the finding in the literature that prompt learning methods like CoOp [1] and CoCoOp [2] have also demonstrated the effectiveness of learning with trainable context tokens when adapting to downstream tasks. We will update the result of Table 6 in the future version.

**Table R2**: Ablation on the number of context tokens

| no. of context tokens | Top-1 $\uparrow$ | Top-5 $\uparrow$ | Insertion $\uparrow$ | Deletion $\downarrow$ |
| :---: | :---: | :---: | :---: | :---: |
| 0 | 75.64 | 91.92 | 54.1 | 17.8 |
| 4 | 76.03 | 92.74 | 55.2 | 17.5 |
| 8 | 76.09 | 92.89 | 56.3 | 16.0 |
| 12 | **76.27** | **92.99** | **57.1** | **15.1** |
| 16 | 76.21 | 92.87 | 57.0 | 15.8 |
| 20 | 76.14 | 92.93 | 56.9 | 15.5 |

> [1] Kaiyang Zhou, Jingkang Yang, Chen Change Loy, and Ziwei Liu. Learning to prompt for vision-language models. International Journal of Computer Vision, 130(9):2337–2348, 2022.
>
> [2] Kaiyang Zhou, Jingkang Yang, Chen Change Loy, and Ziwei Liu. Conditional prompt learning for vision-language models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16816–16825, 2022.

--- Rebuttal Comment 1.1: Title: Response Comment: Thanks to the authors for providing additional results. The results for 0 learnable context tokens make sense based on the explanations. After reading all the other reviewers' responses, I agree with Reviewer V7Pm that the backbone should be further discussed, as ResNet-50/18 are both CNN-based models. Grad-CAM can be easily adapted to ViT; therefore, I also want to see whether the proposed LICO can still be useful for transformer-based models. The missing discussion of ViT limits the scope of this approach, and the paper would be more interesting if the authors could validate the proposed method on SOTA image classification models. --- Reply to Comment 1.1.1: Title: Experimental results with a Transformer-based model Comment: We thank the reviewer for the further feedback.
To address the major concern about the backbone, we trained a ViT-Base-16 network on the ImageNet-1k dataset and provide accuracy, insertion, and deletion results in **Table R3**. We calculated $L_{\text{MF}}$ and $L_{\text{OT}}$ between the language tokens and the representations of the patch tokens and class token. For attention maps, we applied Grad-CAM to the LICO-trained ViT-Base-16 by calculating gradients from the outputs to the last attention layer of the class token. **Table R3** further confirms that the transformer model with LICO not only **performs better** but also **gains better interpretability** than the one without LICO, which is in line with the finding for CNN-based backbones. In future work, we will further develop more explainable decision clues for ViT by incorporating knowledge of LLMs into self-attention.

**Table R3**: Evaluation on ImageNet-1k using ViT-Base-16

| ViT-Base-16 | Accuracy $\uparrow$ | Insertion $\uparrow$ | Deletion $\downarrow$ |
| --- | --- | --- | --- |
| w/o LICO | 77.9 | 55.2 | 14.4 |
| w/ LICO | **78.2** | **56.0** | **13.8** |

--- Reply to Comment 1.1.2: Title: We are looking forward to your feedback Comment: Dear reviewer tr3q, Thanks again for all of your constructive suggestions, which have helped us improve the quality and clarity of the paper! We would like to note that **the concern about evaluating LICO on ViTs, initially raised by reviewer V7Pm, has been confirmed as addressed**. Specifically, the additional Table R3 on ViTs also verifies the major merit of LICO, simultaneously improving interpretability and classification performance, which is not shared by previous popular and widely used interpretation methods. Since the author-reviewer discussion period will end in a few hours, we would appreciate it if you could take the time to read our further response and give us some feedback. Please don't hesitate to let us know if there are any additional clarifications or experiments that we can offer.
If our response resolves your concerns, we kindly ask you to consider raising the rating of our work. Thanks for your time and efforts! Best, Authors of Paper 6636
Rebuttal 1: Rebuttal: ## Global Responses with a PDF file **Comments**: Dear Reviewers, We thank all the reviewers for their thorough summaries and valuable feedback. The reviewers appreciate that our LICO is novel and well-motivated (**rFY8**, **V7Pm**, **r2cD**) and valuable for incorporating language prompts into explainable AI (**rFY8**, **r2cD**), that the experiments are comprehensive (**rFY8**, **V7Pm**, **r2cD**) and demonstrate the effectiveness of LICO (**r2cD**), with good performance (**tr3q**, **rFY8**, **r2cD**) and inference efficiency (**tr3q**), and that the paper is well-organized and easy to follow (**rFY8**, **V7Pm**, **r2cD**). We have posted detailed responses to each reviewer and would deeply appreciate your further feedback on whether our responses adequately address your concerns. If you have any additional comments or questions, we will try our best to address them. Per Q4 of **V7Pm** and Q1 and Q2 of **rFY8**, the following **attached pdf file** provides more attention maps of complex images: - Figure 1: Attention maps of single-class multi-object images. - Figure 2: Attention maps of multi-class multi-object images. Best, The authors Pdf: /pdf/4b7af5c0782c80f19edc364c030086e83946caf8.pdf
NeurIPS_2023_submissions_huggingface
2023
FlowCam: Training Generalizable 3D Radiance Fields without Camera Poses via Pixel-Aligned Scene Flow
Accept (poster)
Summary: This paper presents a generalizable framework for simultaneously estimating both the NeRF model of the scene and the camera poses for a video sequence. The proposed method first uses a pretrained model to predict the optical flow for each video frame. Then, the optical flow is lifted, via monocular depth, to 3D scene flow. The camera poses are solved by SVD from the scene flow. After computing the camera poses, the NeRF model is further fine-tuned for a better result. Strengths: - a lightweight approach to unposed NeRF in the RGB video sequence setting. - the quality of the rendered RGB images looks good after fine-tuning. - well written, easy to read and follow. Weaknesses: - Statement - Line 27-31 - Although the manuscript does acknowledge certain NeRF-based SLAM methods for their ability to provide dense 3D reconstruction, relevant references [1,2,3] are conspicuously absent. Additionally, some NeRF-based SLAM methods that utilize priors for pose estimation have been glossed over. - Line 34 - The cited works do not adequately represent the body of literature since both methods focus exclusively on depth map estimation. This oversight should be rectified by including relevant citations [4,5]. - Line 143 - wrong format - Motivation - The motivation for introducing a single-view generalizable NeRF for relative pose estimation is unclear and weird. - Line 142 - the definition of x_t. In Eq. 2, x_i denotes the sampled points along the ray. Is x_t the S(r) in Eq. 2? - While pixelNeRF can use only a single view to render a novel view, it is much like a mono-depth estimation module. Without optical flow, an alternative solution uses iNeRF to align two single-view point clouds/NeRFs, minimizing photometric loss. However, even the most SOTA mono-depth estimation methods are insufficient to estimate accurate relative pose due to cross-view scale misalignment. Some unposed/posed NeRF methods only use mono-depth as a regularization term.
This is why the proposed method **HAS TO** introduce off-the-shelf optical flow estimation for pose estimation. If the single-view pixelNeRF were removed from the proposed method, optical flow, as the correspondences, would be enough to estimate the essential matrix/relative pose between two neighboring frames. Furthermore, deep SfM modules, like BANet, DROID-SLAM, and DeepV3D, all take multiview images to estimate both multiview depth maps and the camera poses at the same time. The authors should carefully address this issue during the rebuttal. - the motivation of flow confidence weights. - Line 143 - This part could easily be solved by a bidirectional consistency check with optical flow. Introducing another pretrained network for weight prediction will make the generalization issue worse. - Figure 9 does not show the results with dynamic objects. - Line 178 - RGB VO/SLAM confronts the scale ambiguity problem. The proposed method doesn't handle it at all. - Line 194 - where is the definition of N, J? - Line 203 Eq. 8 - The mono-depth is enforced by this loss term during fine-tuning. As mentioned above, the optical flow alone is enough for relative pose estimation. - This loss term does not handle singular solutions. - Experiment - Line 211 - the training details are missing. - the proposed method cannot run on a long video clip. - Most of the experiments show the object-centric case. It would be great to see results on the inside-out case as well. - Although the authors mention that they do not claim SOTA accuracy of pose estimation, pose metrics still need to be reported in the main paper, including DROID-SLAM, DROID-VO, ORB-SLAM, etc. - Line 224 - test-time optimization of the weights of the CNN and MLP on a single video clip in a self-supervised manner is extremely hard. The problem will become worse if the precomputed optical flow is removed. The authors should provide more detail about this section.
- Line 250 - more baseline methods are required, such as NICER-SLAM [2], DIM-SLAM [3], DROID-SLAM. - some unposed NeRF methods with a mono-depth estimation module, such as NoPe-NeRF [6] - and other test-time optimization methods, such as CVD [4], RCVD [5]. - The pose trajectories in the figures are misaligned in both the main paper and the supplementary, which is very confusing. - reference - [1] Sucar, E., Liu, S., Ortiz, J., & Davison, A. J. (2021). iMAP: Implicit mapping and positioning in real-time. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 6229-6238). - [2] Zhu, Z., Peng, S., Larsson, V., Cui, Z., Oswald, M. R., Geiger, A., & Pollefeys, M. (2023). NICER-SLAM: Neural implicit scene encoding for RGB SLAM. arXiv preprint arXiv:2302.03594. - [3] Li, H., Gu, X., Yuan, W., Yang, L., Dong, Z., & Tan, P. (2023). Dense RGB SLAM with Neural Implicit Maps. The Eleventh International Conference on Learning Representations (ICLR). - [4] Luo, X., Huang, J. B., Szeliski, R., Matzen, K., & Kopf, J. (2020). Consistent video depth estimation. ACM Transactions on Graphics (ToG), 39(4), 71-1. - [5] Kopf, J., Rong, X., & Huang, J. B. (2021). Robust consistent video depth estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 1611-1621). - [6] Bian, W., Wang, Z., Li, K., Bian, J. W., & Prisacariu, V. A. (2023). NoPe-NeRF: Optimising neural radiance field with no pose prior. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 4160-4169). Technical Quality: 2 fair Clarity: 3 good Questions for Authors: I am happy to adjust the rating if the authors properly address the concerns mentioned during the rebuttal. ---- after rebuttal: I adjusted the rating to the positive side. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### iMAP and robust unposed NeRF We added iMAP methods to the related works and NeRF methods. Note that iMAP addresses a separate problem: our contribution enables end-to-end training of pixelNeRF without precomputed poses, whereas iMAP performs offline, gradient-descent based pose and NeRF optimization. ### Missing references to Consistent Video Depth Estimation and Robust Consistent Video Depth Estimation We will add CVD and RCVD to our related work. CVD/RCVD fine-tune a pre-trained depth network for temporal consistency, whereas we train a generalizable 3D scene representation; they are not significantly related. ### Typo on line 143 Thanks - fixed! ### pixelNeRF for pose estimation is weird We use the same network for pose estimation and rendering since a) it forces our pixelNeRF to learn good geometry in order to predict correct poses b) it avoids scale inconsistency between poses and rendering. As for why we involve geometry in pose estimation, instead of a 2D solver, see our response below on that point. We can also use a separate depth network instead of re-using pixelNeRF; see Tab. 2, “Depth Regression” of the response PDF to see that it works almost as well. ### 142 ambiguity x_t and x_i both refer to 3D points, where x_i in eq2 refers to the 3D point at the i’th sample along the ray, and x_t refers to the 3D point observed at pixel p_t. We will amend the text accordingly to make this notation more explicit. ### iNeRF formulation iNeRF requires a pre-trained NeRF for pose inversion, which we do not have, and requires many gradient descent steps to align the model for accurate poses, which we cannot afford to embed in the training loop. 
### 2D correspondences sufficient for pose estimation Solving for the essential matrix from 2D correspondences introduces a scale ambiguity between the 3D scene representation and the estimated poses: the pose from the 2D correspondences is extremely unlikely to be at the scale appropriate for the pixelNeRF's constant near and far planes. See Tab. 2, column "2D-Only Pose Solver" of the response PDF; renderings catastrophically fail with this approach. ### DROID-SLAM uses just 2D correspondences Using any of these methods incurs the same scale ambiguity between the generalizable neural scene representation and pose estimation described above and in the overview text. Also note that none of these methods solve for a 3D scene representation; rather, they solve for depth maps and poses. ### Flow confidence weights; bidirectional flow check simpler; generalization is now worse See our overview discussion on flow confidence weights and dynamic object masking. We ablate this bidirectional flow check in Tab. 2 ("Bidirectional Consistency Flow Weights") of the response PDF and report a significant decrease in rendering quality. Also consider that this estimation asks how similar two features are, a task that tends to generalize well. ### Proposed method doesn't resolve scale ambiguity Please see the overview text for an expanded discussion of the scale ambiguity we address, which is between the poses and the 3D scene representation, and of how our method resolves it by using the same geometry estimation for pose estimation and rendering. ### Missing definitions (194, 203) Thanks! We'll amend the text to be more explicit. N refers to the number of timesteps or frames, and J refers to the subset of context images used for rendering. ### Scale resolved with fine-tuning The fine-tuning mechanism is not involved during training of the scene representation; see the overview discussion on fine-tuning and scale ambiguity. We cannot afford to perform fine-tuning during the forward pass.
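The bidirectional (forward-backward) consistency check debated in this exchange can be sketched in a few lines of numpy. This is an illustrative nearest-neighbour version of our own, under assumed flow conventions, and not the ablated implementation:

```python
import numpy as np

def fb_consistency_mask(flow_fw, flow_bw, thresh=1.0):
    """Forward-backward flow consistency check: a pixel is deemed
    reliable if following the forward flow and then the backward flow
    (sampled at the destination) returns close to the start.
    flow_fw, flow_bw: (H, W, 2) arrays of (dx, dy) displacements.
    """
    h, w = flow_fw.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # destination of each pixel under the forward flow (nearest neighbour)
    xd = np.clip(np.round(xs + flow_fw[..., 0]), 0, w - 1).astype(int)
    yd = np.clip(np.round(ys + flow_fw[..., 1]), 0, h - 1).astype(int)
    # cycle displacement: forward flow plus backward flow at destination
    cycle = flow_fw + flow_bw[yd, xd]
    err = np.linalg.norm(cycle, axis=-1)
    return err < thresh                   # boolean reliability mask

# consistent flows (bw = -fw) pass; a mismatched region fails
fw = np.ones((6, 6, 2))
bw = -np.ones((6, 6, 2))
bw[0:2, 0:2] = 3.0                        # simulate an occlusion mismatch
mask = fb_consistency_mask(fw, bw)
```

Pixels whose forward flow lands in the corrupted corner fail the cycle check; everything else passes, which is the binary analogue of the learned soft confidence weights discussed above.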
### Our loss term does not handle singular solutions If there is a singular solution due to textureless images, it should not affect the rendering or 3D scene representation, and it is not a priority since our interest is the 3D scene representation. We will add this discussion to our limitations section. ### Sliding window experiment details; cannot run on long sequences The sliding window method is an offline extension for explicitly chaining predicted camera poses over subsequent video subsequences to accommodate longer sequences. There is no optimization when using the sliding window approach; we simply query our trained model (such as on CO3D). While our method does not accommodate long sequences, the ~30 frames with considerable frameskip we use is within the pose distribution typically used for training such 3D scene representations. ### Inside-out scenes Only the CO3D results are outside-in; please see our results on RealEstate10K and KITTI, which are inside-out. ### Comparison with ORB-SLAM and DROID-SLAM We compare with ORB-SLAM and DROID-SLAM in Tab. 1a of the author response PDF and Fig. 6 of the supplemental PDF. We outperform both ORB-SLAM and DROID-SLAM in this setting. ### Test-time optimization of CNN and MLP difficult and expensive Please see the overview discussion on the application of fine-tuning, which is not core to our method pipeline. Note that a) the fine-tuning results we demonstrate are on our pretrained model, not from scratch, b) the optimization is empirically robust, and c) it is not considerably more expensive to fine-tune the image features as well. ### DROID-SLAM/DIM-SLAM/NoPe-NeRF/RCVD baselines See Tab. 1 and Fig. 1b for the comparison to NoPe-NeRF, and Tab. 1a of the author response and Fig. 6 of the supplement for comparisons to DROID-SLAM.
Since CVD aims to fine-tune a monocular depth estimator to be temporally consistent for a single video, instead of training a 3D scene representation, we respectfully disagree that CVD or RCVD are apt comparisons, but if you feel otherwise we can discuss it and evaluate the comparison. ### Misaligned pose plots We assume you refer to the reversed captions of Figure 4, which we will amend in the paper revision; otherwise, if you can be more specific about which poses are misaligned, we will be happy to correct it. --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal. Although I still have concerns about the `scale drifting`, which is not the `scale ambiguity` issue responded to by the authors, most concerns were properly addressed during the rebuttal. Some missing NeRF-SLAM-based methods still need a proper reference. --- Reply to Comment 1.1.1: Title: Follow-up response on scale drift Comment: Thanks for responding, and again for the detailed review, as well as for the clarification on the type of scale discrepancy referenced. Since our method makes frame-to-frame estimates, as opposed to performing multi-frame optimization, scale drift error could indeed accumulate in the feedforward setting. Adding a scale parameter to the Procrustes solve could address this, but we found that forcing the model to use a single scale yielded more stable training (Tab. 2 of the author response PDF). Enforcing a single scale also has the benefit of more temporally consistent depth estimates, as opposed to the scale-invariant depth estimates commonly produced by monocular depth networks, which are notoriously difficult to normalize correctly. Multi-frame feedforward optimization is exciting future work. We will make sure to add the NeRF-SLAM methods mentioned.
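The Procrustes-style pose solve discussed in this thread, with the optional scale parameter mentioned in the reply, can be sketched as a weighted Kabsch/Umeyama alignment. This is our own illustrative version of such a solver (function name and conventions are assumptions), not the paper's exact implementation:

```python
import numpy as np

def weighted_procrustes(P, Q, w=None, with_scale=False):
    """Weighted Procrustes/Kabsch: find the rigid (optionally similarity)
    transform minimizing sum_i w_i * ||s*R @ p_i + t - q_i||^2 via SVD.
    P, Q: (N, 3) corresponding 3D points, e.g. a point cloud and its
    scene-flow-displaced counterpart; w: (N,) confidence weights.
    """
    w = np.ones(len(P)) if w is None else np.asarray(w, float)
    w = w / w.sum()
    mu_p, mu_q = w @ P, w @ Q                    # weighted centroids
    Pc, Qc = P - mu_p, Q - mu_q
    H = (w[:, None] * Pc).T @ Qc                 # weighted covariance
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    s = (S * [1.0, 1.0, d]).sum() / (w @ (Pc * Pc).sum(1)) if with_scale else 1.0
    t = mu_q - s * R @ mu_p
    return s, R, t

# recover a known similarity transform from noiseless correspondences
rng = np.random.default_rng(1)
P = rng.normal(size=(50, 3))
th = 0.3
R_true = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -1.0, 2.0])
Q = 1.7 * P @ R_true.T + t_true
s, R, t = weighted_procrustes(P, Q, with_scale=True)
```

With confidence weights derived from flow reliability, unreliable correspondences simply contribute less to the covariance, which is the role flow confidence weights play in such a pose solve; fixing `with_scale=False` corresponds to the single-scale choice the authors found more stable.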
Summary: This paper proposes a method to address the challenge of reconstructing 3D neural fields from images and learns them in a self-supervised manner. The main contribution of this method is the joint reconstruction of camera poses and 3D neural scene representations within a single forward pass. Strengths: 1. This paper achieves state-of-the-art performance in its specific setting. 2. The writing in this paper is clear. 3. This paper addresses an important problem. It overcomes the previous limitation of training generalizable Neural Radiance Fields, which require posed videos, and enables direct training on unposed video data. 4. This paper is innovative in its methodology; no one has utilized similar approaches to address this problem before. Weaknesses: 1. The paper lacks clarity in explaining the ablation study, such as the incomplete caption for Table 3. Moreover, the abbreviations used in the table are not defined, making it unclear what they refer to. 2. The paper lacks ablation experiments on not learning intrinsic camera parameters. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. I cannot understand how this paper addresses the generalization issue. Why does PixelNeRF have good generalization capabilities in monocular depth prediction? 2. How does this paper address the problem of scale ambiguity in monocular depth estimation? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: This paper discusses its limitations. However, I would have expected to see more failure cases on internet data.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### The ablation study is unclear in its abbreviations and references We agree and have updated the main paper's ablation table with expanded ablation references as well as its corresponding caption and experiment text. Please see author response PDF Tab. 2 for expanded reference names and caption. In case it is useful for correspondence, note that in the submission's ablation table "MLP-Pose" refers to pose regressed by an MLP (as opposed to using a solver); "No Flow Weights" refers to using the pose solver but not using the confidence weights to guide the optimization; "Full" refers to the full model using the solver with the confidence weights. ### This paper lacks ablations on not learning intrinsic parameters In this paper we use intrinsic parameters when known and only predict them otherwise, such as on the YouTube videos where no such information is available. We noted this on line 93, but we will make this more explicit in the experiments section as well. We also ran an additional ablation experiment on using constant and predicted intrinsics, reported in Tab. 3 of the author response PDF. Surprisingly, using constant intrinsics performs similarly to using SfM-estimated intrinsics, and predicting them actually outperforms using the SfM-estimated intrinsics. While this is surprising, one reason could be that the dataset intrinsics are not perfect but estimated by SfM, or that our model may be predicting intrinsics which better correspond to its geometry estimate. ### Not sure how this method addresses the generalization issue of pixelNeRF PixelNeRF has been shown in several instances to be a robust geometry estimator, used for depth estimation (see "Behind the Scenes: Density Fields for Single View Reconstruction", e.g.). It has been demonstrated that pixelNeRF generality scales favorably when increasing dataset size (see "Objaverse-XL: A Universe of 10M+ 3D Objects").
In general, depth estimation is a subset of the 3D geometry that pixelNeRF estimates. Our key contribution is a step towards training of pixelNeRF on internet-scale, uncurated video datasets. This would dramatically improve the generality of pixelNeRF, as the training distribution could now essentially be all natural videos on the internet. While some steps towards that goal are outstanding (such as addressing dynamic 3D reconstruction), we believe that our paper takes a significant step towards that goal. ### How does this paper address the scale ambiguity in monocular depth estimation? Most monocular depth estimators output depth in an arbitrary scale, and normalizing their outputs is notoriously difficult, even for data from the same domain (such as driving scenes). The reason for this is related to the scale ambiguity we describe in the overview discussion, which boils down to inconsistent scale of camera poses from different videos. More specifically, the depth targets used to train those depth estimators are often only known up to scale, which makes training accordingly challenging when treating those as metric targets. Our model naturally resolves this scale ambiguity by predicting camera poses using our model’s rendering geometry. That is, since our camera poses are a function of our model’s predicted geometry, they are inherently of the same scale, removing any ambiguity introduced by depending on SfM per-scene optimized poses. We assume that is the ambiguity you are referring to, but another ambiguity you might be referencing is the inherent scale ambiguity between our scene representation and the metric scale of the real world, which is impossible without a metric reference (such as RGBD). 
And lastly note that the scale ambiguity we refer to addressing in the text and the overview discussion is rather the difference in scale between the estimated poses (traditionally estimated via COLMAP) and the generalizable scene representation (which is regressed during training). See the overview discussion for an elaboration on this ambiguity and how it otherwise poses a significant challenge for training 3D scene representations such as pixelNeRF, which we overcome. ### Would expect to see more failure cases on internet data Indeed, since we have not trained our model on a large-scale internet dataset, our model does not generalize perfectly yet to random internet videos, for reasons which include non-robust intrinsics prediction and out-of-distribution geometry estimation. Note that we train our pixelNeRF on the same scale of datasets typically used to train pixelNeRF (CO3D, e.g.), and just aim in these experiments to reproduce the same level pixelNeRF, without depending on precomputed poses, rather than increase generalization. The point made in our presentation of fine-tuned results for a few internet videos was to simply show that we can optimize on in-the-wild videos, and that although we do not yet scale up training to internet-scale data (see outstanding bottlenecks mentioned above), our model takes a large step towards enabling such internet-scale training. We will further include an additional set of results in the supplemental material of our model on random internet videos. --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal, which has well addressed my concerns. So I keep the positive side.
Summary: This paper proposes a general method for 3D neural scene reconstruction and camera pose estimation from a video sequence. The method takes a set of video frames as input and outputs the re-rendered video frames and estimated camera poses. The method is based on PixelNeRF, a generalizable NeRF method that takes extracted image features as input. The method first uses single-view PixelNeRF to compute per-frame point clouds. Then, the frame-to-frame 3D scene flow is estimated by leveraging the estimated 2D optical flow. The 3D scene flow is then used to constrain the frame-to-frame camera poses, which are solved by minimizing a weighted least-squares problem. Two losses are used to supervise the training: the RGB loss, which forces the PixelNeRF-based rendered images to be close to the input images; and the flow loss, which makes the projection offset of neighboring frames' point clouds close to the 2D optical flow. Strengths: 1. A nice solution for learning a general NeRF representation for multiview images without camera pose information. 2. The quality of the rendering results on test multi-view images is impressive. Weaknesses: The experiments could be improved to further verify the advantage of the proposed method: 1. Only one quantitative pose estimation comparison is shown; there are no quantitative comparisons with BARF. Besides, NoPe-NeRF is a similar method that takes sequential frames as input for pose estimation, so it would be better to have one more comparison with NoPe-NeRF. 2. Are the quantitative results in Tab. 1 fine-tuned or not? It would be better to show both results. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Overall, the proposed method is reasonable, but I do have questions: 1. There should be a scale problem when estimating the per-view point cloud with single-view PixelNeRF. It seems that Eq. (4) doesn't consider it; why? 2. The confidence weight is output by a network.
Therefore, in my understanding, this network can implicitly handle occlusion and specularities by downweighting the corresponding flow. But it seems that the masked RGB images shown in Fig. 9 mask out too much content (even for textured diffuse regions). Is this because of inaccurate 2D optical flow estimation? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The proposed method learns a conditioned general NeRF network for multiview images. It is still limited since the images must be kept to accompany the learned model. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
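The flow-lifting step summarized in this review (unprojecting pixels with per-frame depth, then following the 2D optical flow to obtain 3D correspondences) can be sketched roughly as follows. This is a hypothetical illustration under assumed conventions (pinhole intrinsics `K`, nearest-neighbour flow lookup), not the paper's actual implementation:

```python
import numpy as np

def lift_flow_to_scene_flow(depth1, depth2, flow, K):
    """Lift 2D optical flow to 3D point correspondences via per-frame depth.

    depth1, depth2: (H, W) depth maps for frames 1 and 2.
    flow:           (H, W, 2) optical flow from frame 1 to frame 2 (dx, dy).
    K:              (3, 3) pinhole camera intrinsics.
    Returns (N, 3) arrays P1, P2 of corresponding 3D points (N = H * W).
    """
    H, W = depth1.shape
    Kinv = np.linalg.inv(K)
    ys, xs = np.mgrid[0:H, 0:W]
    # Unproject every frame-1 pixel with its depth: P = depth * K^-1 [u, v, 1]^T.
    pix1 = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T
    P1 = (Kinv @ pix1) * depth1.reshape(1, -1)
    # Follow the flow to frame-2 pixel locations (nearest-neighbour lookup),
    # then unproject those pixels with the frame-2 depth map.
    x2 = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, W - 1)
    y2 = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, H - 1)
    pix2 = np.stack([x2, y2, np.ones_like(x2)], axis=-1).reshape(-1, 3).T
    P2 = (Kinv @ pix2) * depth2[y2, x2].reshape(1, -1)
    return P1.T, P2.T
```

Each correspondence pair, together with a per-pixel confidence weight, can then feed the weighted least-squares pose solve the review describes.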
Rebuttal 1: Rebuttal: ### Only one quantitative pose estimation comparison is reported, and we should compare with a more recent unposed NeRF method (such as NoPe-NeRF) Please note that we extensively benchmark with the appropriate baselines of RUST and VideoAutoencoder, including quantitative evaluations of pose estimation on each dataset in Tab. 2 of the main paper. We also include comparisons to ORB-SLAM in Tab. 2 of the main paper and add DROID-SLAM quantitative comparisons in Tab. 1a of the author response, as well as qualitative pose estimation results for both methods in Fig. 6 of the supplemental PDF. We outperform all baselines considered for pose estimation on these difficult sequences. As for the unposed NeRF family of methods as baselines, note that their goal is orthogonal to ours, in that they aim to optimize a NeRF for a single scene without poses, whereas we are interested in training a generalizable NeRF representation without poses. See the general overview response for further discussion on this point. With that distinction made, we agree that our experiment would be stronger with a more recent and robust unposed NeRF method. See the author response PDF, Fig. 1b and Tab. 1b for quantitative and qualitative evidence that we outperform both BARF and NoPe-NeRF for pose estimation. We will include these results in the experiments section of the main paper. ### Fine-tuned results ambiguity In general, all results in the paper are zero-shot (not fine-tuned), unless explicitly marked as fine-tuned. We will clarify this distinction in the experiments section of the main paper. Also note that the fine-tuning mechanism is not part of our model pipeline, which aims to remove precomputed poses from the training process of generalizable 3D representations, but rather a demonstration that our self-supervised formulation allows us to quickly optimize on any scene for higher-quality novel view synthesis. See the overview discussion for further elaboration on this point.
### Scale ambiguity in single-view depth estimation that could have been accounted for in Procrustes estimation This is an insightful point, and while your analysis is correct, in practice we omit the scale term to encourage the model to learn a consistent scale, and observe that the model’s depth estimates are generally of the same scale. Surprisingly, we found that constraining the scale to be the same across the video is critical, and that model performance suffers with the extra scale degree of freedom. See Tab. 2 (column “Scale Adjusting Procrustes”) of the author response PDF for an explicit evaluation demonstrating this. ### Confidence maps are concerningly sparse, even discounting textured diffuse regions This is an acute observation, and the weights are indeed relatively sparse, but note that our model theoretically only needs four points to solve for the correct pose, and can therefore afford to choose fewer but more accurate correspondences. Note that our model can also mask out parts of the scene where it is not confident about the geometry, such as sky or distant regions, not just bad correspondences. Please see Fig. 1a of the author response PDF for an illustration of our model’s flow weight mask including dynamic objects, as well as the overview discussion for further elaboration on the flow weights. ### The method is limited since the images should be kept to accompany the learned model While image-conditioned scene representations, such as pixelNeRF, keep the images during rendering, note that all NeRF-based representations keep around some representation, whether it be the weights of a large global network or the densities and radiance of a voxel grid. Also note that one could potentially exchange an image-conditioned scene representation for a voxel-grid-based one by sampling the 3D radiance field at the voxel locations, or alternatively distill the representation into a global network.
We agree nonetheless that optimizing a global NeRF, such as BARF, and not keeping keyframes is an exciting and self-contained direction for future work. --- Rebuttal Comment 1.1: Comment: Thanks for clarifying my questions. Please incorporate the materials in the rebuttal into the revision.
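The scale-adjusting Procrustes variant discussed in this rebuttal can be illustrated with a generic weighted Procrustes/Umeyama solver, with the similarity scale either fixed to 1 (as the authors advocate) or estimated. This is a standard textbook construction sketched under our own naming, not the authors' code:

```python
import numpy as np

def weighted_procrustes(P1, P2, w, estimate_scale=False):
    """Solve min over (s, R, t) of sum_i w_i * || s * R @ P1_i + t - P2_i ||^2.

    P1, P2: (N, 3) corresponding 3D points; w: (N,) non-negative weights.
    Returns scale s, rotation R (3x3), translation t (3,).
    With estimate_scale=False, s is fixed to 1 (a single consistent world scale).
    """
    w = w / w.sum()
    mu1 = w @ P1                          # weighted centroids
    mu2 = w @ P2
    X, Y = P1 - mu1, P2 - mu2
    cov = Y.T @ (w[:, None] * X)          # weighted cross-covariance (3x3)
    U, S, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(U @ Vt))    # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = U @ D @ Vt
    if estimate_scale:
        var1 = (w * (X ** 2).sum(axis=1)).sum()
        s = (S * np.diag(D)).sum() / var1  # Umeyama scale estimate
    else:
        s = 1.0
    t = mu2 - s * R @ mu1
    return s, R, t
```

Downweighting unreliable correspondences via `w` is exactly where a learned confidence map would plug in; the ablated "Scale Adjusting Procrustes" column corresponds to `estimate_scale=True`.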
Summary: The paper proposes using scene flow to optimize for camera poses to produce a generalizable 3D radiance field. The key contribution is the joint optimization of the camera poses and 3D neural scene representations in a single forward pass. The method has been evaluated on multiple datasets and performs well on datasets where traditional pose estimation fails. Strengths: The motivation for a joint optimization of camera poses and reconstruction has wide applications and is easy to scale. As the current method proposes, multiple views of a scene inherently produce good correspondences, which leads to reasonable optical flow estimates. Incorporating optical flow into the 3D pose estimation and reconstruction helps the overall reconstruction pipeline. Using optical flow for reconstruction and training neural radiance fields without camera poses have each been well explored independently; combining them into a single formulation is still less explored. The current method lifts the optical flow to scene flow using neural scene representations and uses it for optimization. This is an interesting direction to explore. The results show that the method is able to perform better than traditional pose estimation methods like ORB-SLAM in some of their failure cases. Weaknesses: The current method proposes a tradeoff, stating that instead of using camera poses first and a neural radiance field later as a two-stage process, joint optimization is a more promising direction. But joint optimization adds additional optimization constraints which have not been extensively discussed. It would be nice to see what the failure cases of the joint optimization are compared to the two-stage optimization process. The camera pose estimation has only been evaluated against ORB-SLAM3 and VidAE, both of which do not incorporate neural radiance fields for 3D pose estimation. Recently there have been better methods which optimize for both radiance fields and poses simultaneously.
Having such a baseline would give a better understanding of the accuracy of the algorithm. The number of frames used for error computation is very small, i.e., 20 frames and 200 frames. The stated advantage of the method is that joint optimization scales easily; showing the method's accuracy compared to ORB-SLAM on larger datasets or longer sequences would strengthen this claim. Technical Quality: 3 good Clarity: 3 good Questions for Authors: In Fig. 3, results were shown using BARF for pose estimation; why are there no quantitative results using the same? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Limitations have been well discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### Pose estimation is only compared with non-NeRF-based methods, and unposed NeRF methods have been proposed which would yield a more accurate comparison We add a NoPe-NeRF comparison (see author response PDF, Fig. 1b and Tab. 1b); we succeed where BARF and NoPe-NeRF fail. Also recall that the unposed NeRF family of methods has an orthogonal goal to ours, in that those methods jointly optimize poses and a NeRF for a single scene, whereas our aim is to train a generalizable scene representation. ### Pros and cons compared to the two-stage approach, failure cases of the proposed joint optimization of poses and NeRF A two-stage approach usually involves a non-differentiable pose estimator, either running in near real-time or offline. For the offline two-stage approach, such as using COLMAP, the benefits include that a) it’s less expensive, b) there exists a more developed ecosystem of optimization tools for it, such as intrinsics solvers and loop closure mechanisms, and c) it can optimize on longer sequences with greater accuracy. The drawbacks are that it is prohibitively slow, cannot be embedded in a real-time training loop, and is known to still regularly fail, particularly on rotation-dominant sequences. Online non-differentiable systems, such as ORB-SLAM, have the benefit of being fast enough to run alongside the training of a downstream task, with the main drawback being that they often fail due to their lack of a scene prior. See Tab. 1a of the author response PDF, where we show that ORB-SLAM loses tracking on over half of the CO3D sequences. Joint optimization methods have the benefit of being differentiable, meaning we can resolve the scale ambiguity between poses and renderer (discussed extensively in the overview discussion), run in real time, and improve with the rendering loss as well.
The main failure mode is that the poses are less accurate than those of an offline optimization method, such as COLMAP, and the renderings are blurred in proportion to the pose inaccuracy. Please see the supplemental document (Figs. 2 and 6) for blurred renderings and imperfect pose estimations. Joint optimization also incurs additional cost compared to a two-stage approach, as we have to additionally estimate geometry and poses for each timestep in each forward pass. ### Number of frames is very small, which contradicts the claim that our contribution is to scale up training Note that we are referring to the goal of scaling up the size of the datasets, e.g., moving from CO3D or ShapeNet to all of YouTube, rather than scaling up the length of the videos. Though our experiments indeed usually span 20-30 frames instead of the 1000 frames used for odometry evaluations, note that we often use a considerable frameskip between images, and more sophisticated pose estimation algorithms in future work will likely be able to accommodate even larger frame skips. For instance, the 200 frames cited for Tanks and Temples are downsampled from the original video to just 1 FPS, spanning a full circle around the Excavator, and the CO3D long-trajectory evaluations also use a significantly lower framerate, such that the 200 frames typically similarly circle the full object. The resulting camera baseline from our framerate downsampling is appropriate for our goal of training a generalizable pixelNeRF and within the pose distribution typically used to train these methods. ### BARF comparisons should have quantitative pose evaluations We have added quantitative comparisons to BARF and NoPe-NeRF (Tab. 1b of the author response PDF).
Rebuttal 1: Rebuttal: We appreciate the time and energy the reviewers have invested in reviewing our paper and for offering insightful and constructive feedback, which will make our paper clearer and stronger. We are glad that reviewers recognized our paper’s contribution in addressing an “important problem” (kV1r) in training a “general NeRF representation without camera pose information” (3ZaF) which has “wide applications and is easy to scale” (oZ25) and “achieves state-of-the-art performance in its specific setting” (kV1r). ## General Clarifications ### Paper goals and their difference from conventional odometry & SLAM methods. Our goal is to build a neural network that can jointly predict a 3D radiance field as well as camera poses in a single forward pass, without the need for offline preprocessing of training videos with Structure-from-Motion. We succeed at this goal and train pixelNeRF end-to-end on challenging real-world video with significant camera rotation. On the challenging CO3D 10-Category video dataset, our method obtains more accurate poses than ORB-SLAMv3 and DROID-SLAM (see author reply, Table 2a). Thus, we do claim - and demonstrate - that training pixelNeRF with poses obtained from either DROID-SLAM or ORB-SLAMv3 would perform worse than our method. However, we explicitly (Lines 78, 219) do not claim that our method is competitive with offline Structure-from-Motion, such as COLMAP, especially on long-horizon pose estimation, where loop closures and global pose graph optimization are critical. This, however, is irrelevant to our goal of removing the SfM pose bottleneck from training these methods. To illustrate the computational infeasibility of scaling up SfM methods, consider that pose estimation for the CO3D dataset totaled over 5 GPU-years, and CO3D is an insignificant fraction of the size of internet-scale datasets. ### Fine-tuning mechanism and comparison with single-scene unposed NeRF methods, such as BARF and NoPe-NeRF.
While our key contribution is the training of pixelNeRF with a feedforward pose estimate, the fine-tuning serves to demonstrate that for longer sequences, our self-supervised loss yields gradients that lead to high-quality novel view synthesis within minutes. We have added comparisons to both BARF and NoPe-NeRF to Section 4; please see the author response PDF, Fig. 1b and Tab. 1b. BARF and NoPe-NeRF both fail to capture the rotation-dominant pose distribution; our method outperforms both of them. ### Scale ambiguity The scale ambiguity we refer to resolving is the discrepancy between the world scale estimated by the offline pose-estimation method (usually COLMAP) and the world scale estimated by the generalizable 3D scene representation. To train these 3D scene representations, it is assumed that the poses have a consistent scale in which the scene representation can learn to predict geometry. Several view synthesis methods (GenVS, GPNR, Diffusion with Forward Models) introduce their own nontrivial scale-normalization steps and note that they are critical for stable training. ## Additional Experiments In addition to reviewer-specific experiments, we highlight a few experiments common to all reviewer requests below. ### More robust and recent unposed NeRF methods (NoPe-NeRF) We added a NoPe-NeRF comparison in addition to BARF; see Fig. 1b and Tab. 1b of the author response PDF. Despite NoPe-NeRF’s addition of a monocular depth prior on top of BARF, both fail to capture the correct pose distribution. ### Scale ambiguity demonstration To demonstrate the difficulties involved with the mentioned scale ambiguity, we randomly scale the poses estimated by our model before rendering and report the corresponding decrease in rendering quality in Tab. 4 of the author response PDF. ### Confidence weights clarification and additional demonstration As reviewer 3ZaF mentions, the flow weights that our model estimates are quite sparse.
Since our pose estimation theoretically requires only four 3D correspondences, our model can afford to be selective for good correspondences. Also note that our model not only masks out bad correspondences and nonrigid scene elements, but also areas where it has geometry uncertainty, such as the sky region. Note that this flow weighting is critical to robust pose estimation, which we ablate in Tab. 2 of the author response PDF (“No Flow Weights” column). We include an instance in Fig. 1a of the author response PDF where our model indeed masks out a dynamic object (a bicyclist), but note that because the flow weights are relatively sparse, we are not suggesting that our model produces a tight “dynamic object mask”; rather, we mention dynamic objects being masked to provide intuition about why the flow weights are critical. Pdf: /pdf/541054cbad89cf854b8598ff4b6e09117add5526.pdf
NeurIPS_2023_submissions_huggingface
2023
Factorized Contrastive Learning: Going Beyond Multi-view Redundancy
Accept (poster)
Summary: The paper introduces FactorCL, a novel method for learning multimodal representations. The proposed method generalizes traditional MI maximization-based approaches by capturing both shared and unique information relevant to downstream tasks. From an information-theoretic perspective, the paper demonstrates that traditional CL approaches, based on shared task-relevant information maximization, perform sub-optimally in the presence of non-shared task-relevant information, which is an easily imaginable case in a multi-modal setting. From here, the paper proposes a novel CL objective and augmentation strategy to address the uniqueness gap. Experimental evaluation demonstrates that the proposed method outperforms several existing CL approaches in the multi-modal setting. Strengths: 1. The work is very timely. The paper addresses an important question of adopting CL methods for multi-modal scenarios. 2. The paper is clearly written and easy to follow. 3. The proposed method can be seen as an information-theoretic generalization of base contrastive learning approaches, which only address shared task-relevant information. This is a natural extension. 4. Section 2 provides an intuitive information-theoretic analysis of multi-modal CL. Figure 2 gives a nice overview of the bounds and their role in the objective. 5. Experimental results clearly demonstrate the benefits of FactorCL. Weaknesses: 1. If the information is modality "unique", but task-relevant, shouldn't it end up in the shared task-relevant partition by means of simply optimizing for the shared information (as non-shared info is implicitly minimized this way), as standard CL methods do? 2. If weakness 1 holds, then what is the reason why standard CL methods fail? Can it be solely attributed to sub-optimal augmentation, which limits the shared information? 3. Table 2 reveals that most of the performance increase can be attributed to improved augmentation. Is it connected to Weaknesses 1 & 2? 4. L 31-39.
Isn't low shared information a direct consequence of high unique information (and vice versa)? If yes, then there is no need to define it as a separate limitation. Minor: - Figure 2 typo X1 (modality 2) - Eq (1) typo I(x1, x2, y) -> I(x1, x2 | y) - Formatting of Table 2 can be improved, the top row X1;X2 is hard to read - L171-174 is confusing. Does it mean X1 and X2 are concatenated with the encoded features of (X1' & X2')? Technical Quality: 3 good Clarity: 3 good Questions for Authors: My questions are in Weaknesses 1,2,3,4. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: I suggest the paper include a separate section on the limitations of the methods, where authors elaborate on challenging scenarios where they expect the proposed approach to not deliver substantial improvements. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable feedback and insightful comments! We respond to some concerns below: [Unique partition] We define shared information as $I(X_1; X_2; Y)$ and unique information as $I(X_1; Y | X_2)$ and $I(X_2; Y | X_1)$ based on information theory, and from this definition the two areas of shared and unique information are strictly disjoint. **Note that multimodal and multiview are different: in multimodal, the $X_1$ and $X_2$ are fixed, so the shared and unique regions are fixed. In multiview, a single input $X$ is augmented into $X'$, so there are ways to control how much shared and unique information there is between $X$ and $X'$ [Tian et al., InfoMin].** In practice, Theorem 1 shows the result of CL on multimodal data - it will discard unique information according to our definition. Figure 1 shows this in practice, with a gradual decrease in standard CL performance as the ratio of unique information increases. [Is Weakness 1 true, and why does standard CL fail] Weakness 1 does not hold, as we explained in the answer above. Why standard CL fails is not solely due to sub-optimal augmentation but is a fundamental limitation of standard CL methods - **contrasting between $(X_1, X_2)$ pairs can only capture shared mutual information between $X_1$ and $X_2$, but not unique information, regardless of how augmentation is performed. Unique information has to be captured via modality-specific signals, alongside the right modality-specific augmentations and factorized representations, which we show in FactorCL.** [Performance improvement] The Table 2 performance increase is not only from better augmentation. Both FactorCL-SUP and FactorCL-SSL have increased performance, but improved augmentation is only used in FactorCL-SSL. Performance increases in Table 2 come from our method being able to capture unique task-relevant information in both supervised and SSL settings, directly addressing limitation 2 in the introduction.
[Low shared and high unique] High unique information may not imply low shared information. There could be cases where both shared information and unique information are high. The MOSEI dataset [83] is an example where modality-unique information (text expressing a joyful event) and modality-shared information (happiness from a delighted voice and a smiling face) are both rich and useful for downstream sentiment/emotion tasks. [Minor comments] We thank the reviewer for the suggestions. We fixed the Figure 2 typo. Eq. (1) does not have a typo, and $I(X_1; X_2; Y)$ is the correct form of the shared information. We have improved the formatting of Table 2. And yes, Lines 171-174 mean $X_1$ and $X_2$ are concatenated with the encoded features of ($X_1'$ & $X_2'$). [Limitations] We will add more discussion regarding limitations to the Appendix: 1. Optimizing better MI lower and upper bounds could improve performance for higher-dimensional and complex modalities. 2. Extending the work of InfoMin [Tian et al.] to automatically generate data augmentations to satisfy the optimal augmentation assumption, and leveraging future progress in multimodal generative models for data augmentation. 3. Quantifying whether shared or unique information is more important for the task to optimize FactorCL better. --- Rebuttal Comment 1.1: Comment: I thank the authors for the response. The rebuttal addresses my concerns, given the clarifications are added to the main paper. After reading the rebuttal and other reviews, I keep my score at weak accept.
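The factorization defended in this rebuttal, $I(X_1, X_2; Y) = I(X_1; X_2; Y) + I(X_1; Y | X_2) + I(X_2; Y | X_1)$, is an exact chain-rule identity and can be checked numerically on a toy distribution. The sketch below is our own illustration (not the paper's code): with independent fair bits $S, U_1, U_2$ and $X_1 = (S, U_1)$, $X_2 = (S, U_2)$, $Y = (S, U_1, U_2)$, shared and unique information are each 1 bit, and neither implies the other is small.

```python
import numpy as np
from collections import Counter
from itertools import product

def H(counts):
    """Shannon entropy (bits) of a distribution given as outcome counts."""
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return float(-(p * np.log2(p)).sum())

# Eight equally likely worlds from independent fair bits S, U1, U2.
# Components of each world: world[0] = X1, world[1] = X2, world[2] = Y.
worlds = [((s, u1), (s, u2), (s, u1, u2))
          for s, u1, u2 in product([0, 1], repeat=3)]

def Hm(idx):
    """Entropy of the joint marginal over the listed components."""
    return H(Counter(tuple(w[i] for i in idx) for w in worlds))

# Chain-rule expressions for each term of the decomposition.
total   = Hm([0, 1]) + Hm([2]) - Hm([0, 1, 2])                # I(X1,X2; Y)
unique1 = Hm([0, 1]) - Hm([1]) - Hm([0, 1, 2]) + Hm([1, 2])   # I(X1; Y | X2)
unique2 = Hm([0, 1]) - Hm([0]) - Hm([0, 1, 2]) + Hm([0, 2])   # I(X2; Y | X1)
shared  = total - unique1 - unique2                            # I(X1; X2; Y)
print(total, shared, unique1, unique2)  # 3.0 1.0 1.0 1.0
```

This also illustrates the authors' reply to the reviewer's "Eq. (1) typo" remark: the shared term is the three-way quantity $I(X_1; X_2; Y) = I(X_1; X_2) - I(X_1; X_2 | Y)$, which here equals $1 - 0 = 1$ bit.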
Summary: This work addressed the problem of contrastive learning in a multimodal setting, particularly in capturing shared and modality-specific information regarding downstream tasks. Existing approaches assume that the information contained in different modalities is somewhat the same (redundant), but in the real world, this might not be the case. The authors proposed to factor the task-specific mutual information into shared and unique components. They provided a derivation for this factorization, and their FactorCL algorithm optimizes these two parts with approximated upper and lower bounds separately, under an assumption of optimal unimodal augmentation. The authors evaluated against chosen baselines on a collection of tasks in the MultiBench benchmark and show improvements over these baselines. Strengths: - The authors provided a lot of details about their approach, from the analysis of CL algorithms and their approximations to experiment results. - Most of the simplifications look intuitively reasonable. - On the chosen benchmark, the gains by their algorithm were decent. Weaknesses: - The paper is packed with too many details, which makes it hard to follow the main idea. - On the experiment results, for example in Table 2, even though there were improvements over baselines, the results were too far below those reported in previous works, including the original MultiBench paper (https://arxiv.org/pdf/2107.07502.pdf - also Table 2). This casts doubt on the validity of the approach of this paper. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Why don't you conduct experiments on various well-established vision-language image-text datasets out there, such as MS-COCO or Flickr30K? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable feedback and insightful comments! We respond to some concerns below: [Too many details] The details cover the exact mathematical derivation from the definitions of shared and unique task-relevant information to our final self-supervised objectives. We will add an overview of the derivation steps and intermediate objectives and move some details to the Appendix for clarity. **We also emphasize that the actual implementation is relatively straightforward to adapt to a standalone contrastive framework (Algorithms 2 and 3). Code is included in the supplementary, and we will release it publicly so that all implementation details are fully transparent and reproducible.** [Performance gap] There are 2 key differences between the experiments: 1. Our approach is currently implemented for two modalities as we primarily study two views in our framework, while the best reported MultiBench results are for supervised learning with three modalities. 2. MultiBench and related work design the best modality- and task-specific multimodal architectures to really push for the best supervised learning performance (e.g., complex multimodal transformer methods), while we use generic encoders (not necessarily the most complex architectures), focus on contrastive representation learning objectives, and use linear probing to evaluate performance. This is in line with observations that SSL with generic encoders (e.g., SimCLR) can have lower accuracies than supervised models whose architectures are optimized for supervised performance (e.g., CoAtNet [Dai et al.]), partly due to the linear probing evaluation (e.g., Chen et al.). During the rebuttal period, we scaled up some experiments to address the differences due to points 1 and 2 and summarize the results in Table 2 in the rebuttal pdf: 1. We use the architecture from the best supervised models, and we first pretrain the model using FactorCL-SSL and then perform supervised fine-tuning to evaluate.
The total number of epochs for FactorCL-SSL pretraining equals the total number of epochs for training supervised baselines, with FactorCL-SSL having a few epochs (fewer than ten) for fine-tuning. We also extended FactorCL to three modalities: we first perform vision + language FactorCL-SSL pretraining and concatenate audio features for supervised fine-tuning. The number of fine-tuning epochs is also fewer than ten. 2. **Overall, we see better results from FactorCL (ours) than the supervised baselines we reproduced across different datasets as well as architectures**, suggesting that our method achieves stronger results by capturing unique task-relevant information. There is still a gap between the FactorCL-SSL results and the reported supervised results in MultiBench; we attribute this small gap to the fact that the supervised models in MultiBench use all three modalities to train from scratch, while we only use the audio modality to fine-tune for a few epochs, so our model does not learn all multimodal interactions. Dai et al. Coatnet: Marrying convolution and attention for all data sizes. NeurIPS 2021. Chen et al. Big self-supervised models are strong semi-supervised learners. NeurIPS 2020. [Datasets] These established vision-language retrieval benchmarks test only for shared information between images and captions and do not need unique information. **During the rebuttal period, we further added experiments on the NYCaps dataset** [Hessel et al.], a new dataset testing the matching of cartoon images and humorous captions, which requires unique humorous information in images and text.
We show strong performance when continuing contrastive training on top of a pre-trained CLIP using SimCLR, SupCon, or FactorCL and evaluating using zero-shot retrieval, **SimCLR has an accuracy of 49.2%, SupCon has an accuracy of 49.7%, and our method outperforms both by achieving 50.5%.** Our paper also includes experiments on IRFL [78], which aims to match images and figurative captions (rather than literal captions), requiring more unique information. From Table 3, we outperform the state-of-the-art in classifying images and figurative captions, outperforming zero-shot, fine-tuned, and continued pre-trained CLIP models. Hessel et al., Do Androids Laugh at Electric Sheep? Humor "Understanding" Benchmarks from The New Yorker Caption Contest. ACL 2023.
Summary: Based on mutual-information theory, this paper proposes a new multi-modal contrastive learning method (FactorCL) to learn both shared and unique multi-modal task-relevant information, which captures task-relevant information via maximizing MI lower bounds and removes task-irrelevant information via minimizing MI upper bounds. The method achieves significant improvements on different real-world datasets. Strengths: 1. How to capture both the shared and unique task-relevant information in different modalities is interesting. 2. The theoretical analysis looks solid and cooperates well with the empirical results. 3. The writing is clear and easy to follow. In particular, the figures represent the main ideas and are easy to understand. 4. The empirical results show significant improvements on different types of datasets. Weaknesses: 1. It seems that the unique-augmentation step is strongly related to the task-relevant information of different modalities. For example, when the text is “a yellow flower”, the ColorJitter operation should be removed. And when the text is “a flower”, the ColorJitter operation is a useful data augmentation. Is it possible to design a strategy that can automatically select the appropriate data augmentations? 2. As shown in Table 2, FactorCL-SSL and FactorCL-SUP show significant differences on some datasets. In particular, on MOSI, SimCLR and SupCon show similar performance, while the performance of FactorCL-SSL and FactorCL-SUP has a large gap. Is it possible to provide more discussion about that? Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: See weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
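The reviewer's ColorJitter example suggests a simple rule-based augmentation selector, which could be sketched as follows (a crude illustration of the reviewer's point only; the keyword list, helper name, and op names are hypothetical, not from the paper):

```python
def select_augmentations(caption, ops):
    """Drop augmentation ops that would destroy caption-shared info.

    If the caption mentions a color, color-altering ops such as
    "color_jitter" would erase information shared with the text,
    so they are removed; otherwise all ops are kept.
    """
    color_words = {"red", "yellow", "blue", "green", "black", "white"}
    mentions_color = any(
        w.strip('.,!?').lower() in color_words for w in caption.split()
    )
    if mentions_color:
        return [op for op in ops if op != "color_jitter"]
    return list(ops)
```

A learned, rather than rule-based, version of this selector is essentially what the reviewer's question asks for.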
Rebuttal 1: Rebuttal: Thank you for your valuable feedback and insightful comments! We respond to some concerns below: [Automatic augmentations] We find that **augmentations that approximately satisfy the optimal multimodal augmentation defined in Eqs. 17-18 are sufficient for good performance, which is simpler and more straightforward to implement on real-world datasets** than strictly satisfying Eqs. 17-18. Our intuition is to avoid augmentations that remove or strongly destroy information shared by the other modality (e.g., the caption) and augment via cropping or color jittering in the image. We will clarify this in the paper as a guideline for practitioners. From Table 3, we outperform independent augmentations and other baselines. It is possible to extend the work of InfoMin [Tian et al.] to generate data augmentations to automatically satisfy the optimal augmentation assumption. Still, these methods require generative models for multiple modalities. We think this is an exciting direction for future work and expect progress in multimodal generative models to yield further advances in these problems. [Discussion on the gap between FactorCL-SSL and FactorCL-SUP] We found that training a generic multimodal model with SSL was difficult due to the small sample size and high-dimensional and temporal challenges of the MOSI dataset. During the rebuttal, we adapted the best supervised multimodal architecture for MOSI. We performed CL pre-training on this architecture, which yielded an improved FactorCL-SSL performance of 80%, closer to FactorCL-SUP performance. We include these updated results and comparisons in Table 2 of the attached rebuttal pdf. --- Rebuttal Comment 1.1: Title: reply to the rebuttal Comment: I thank the authors for the responses. I will maintain my original rating.
Summary: This paper presents FACTOR CL, a method for multimodal representation learning that captures both shared and unique task-relevant information, going beyond the common approach of focusing on shared information across different data modalities. FACTOR CL is based on three key contributions: factorizing task-relevant information into shared and unique representations, capturing and discarding task-relevant and irrelevant information through optimizing mutual information bounds, and utilizing multimodal data augmentations to approximate task relevance in the absence of labels. On large-scale real-world datasets, FACTOR CL achieves state-of-the-art results on six benchmarks. The premise of this paper is quite similar to the following work (Chaudhuri et al., Cross-Modal Fusion Distillation for Fine-Grained Sketch-Based Image Retrieval: https://bmvc2022.mpi-inf.mpg.de/0499.pdf); however, the formulation and experiments are different. Strengths: (1) This method is innovative in its attempt to model not only the shared information but also the unique information between different modalities, which is often overlooked by traditional Contrastive Learning (CL) methods. The framework is applicable to a broad range of settings, not just those where shared information is dominant. This allows it to handle diverse multimodal distributions, even those with substantial unique information. (2) The proposed method utilizes self-supervised data augmentations, which allows it to learn task-relevant information without access to labels. This enables the algorithm to learn in a more unsupervised manner, reducing the need for extensive labeled data. Weaknesses: (1) The method relies on certain assumptions, such as the optimal augmentation assumption, which may not always hold in real-world applications. If these assumptions do not hold, the effectiveness of the method may be significantly compromised. 
The method's performance might heavily rely on the quality of the data augmentation techniques used, which might not always be easy to decide or optimize. (2) Calculating mutual information can be challenging in practice, especially when dealing with high-dimensional data. Though approximations are used, they may not always accurately capture the true mutual information. (3) The method involves complex formalism and numerous mathematical assumptions, which could make it difficult to implement and adapt to different scenarios. Technical Quality: 3 good Clarity: 3 good Questions for Authors: (1) I wonder what are the limitations of current contrastive learning (CL) methods in modeling unique information in the context of multimodal data. (2) It would be nice to clarify how the proposed method distinguishes between shared and unique information across different modalities. What does the 'uniqueness gap' in this context mean? How does it affect the learning process? (3) How does the proposed framework handle situations where there is little to no shared information between modalities, or where most of the shared information is irrelevant to the task? (4) In the derivation of supervised contrastive learning objectives, how do the lower and upper bounds for mutual information terms impact the learning process? (5) What are the practical implications of applying semantic augmentations on each modality in the context of unsupervised contrastive learning? (6) How does the method ensure the learning of task-relevant information without access to labels in the case of unsupervised contrastive learning? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have not adequately addressed the limitations of the proposed method. Potential limitations could be found from my comments listed under the weaknesses and limitations sections. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
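For context on the MI upper bounds this review raises concerns about, a CLUB-style estimator (Cheng et al., 2020) with a Gaussian variational $q(y|x)$ can be sketched as follows (an illustrative sketch only, not the paper's InfoNCE-CLUB implementation; the function names and Gaussian assumption are ours):

```python
import numpy as np

def club_upper_bound(x, y, mu_fn, log_var):
    """CLUB-style upper bound on I(X; Y).

    Assumes a Gaussian variational approximation
    q(y|x) = N(mu_fn(x), exp(log_var)); the bound is the average
    log-likelihood of matched pairs minus that of all cross pairs
    (constant normalizers cancel in the difference).
    """
    mu = mu_fn(x)
    pos = -0.5 * np.mean(np.sum((y - mu) ** 2 / np.exp(log_var), axis=1))
    diffs = y[None, :, :] - mu[:, None, :]  # (n, n, d): y_j vs mu(x_i)
    neg = -0.5 * np.mean(np.sum(diffs ** 2 / np.exp(log_var), axis=2))
    return pos - neg
```

As the review notes, the quality of such bounds depends on how well the variational family matches the true conditional.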
Rebuttal 1: Rebuttal: Thank you for your valuable feedback and insightful comments! We respond to some concerns below: [Assumption] Augmentations that exactly satisfy $I(X_1; X_1') = I(X_1; Y)$ and $I(X_2; X_2'|X_1) = I(X_2; Y|X_1)$ (Eqs. 17-18) are hard to construct, so instead we relax them to $I(X_1; X_1') \approx I(X_1; Y)$ and $I(X_2; X_2'|X_1) \approx I(X_2; Y|X_1)$, by using these intuitions to guide augmentation design (details in Appendix D.3 and examples in Table 5): our intuition is to avoid augmentations that will remove or strongly destroy information shared by the other modality (e.g., the caption), and augment via cropping or color jittering in the image. We will clarify this in the paper as a guideline for practitioners. **Our augmentations consistently perform better than independent augmentations (Table 3), suggesting that approximately satisfying this condition is sufficient**. Most importantly, the unique augmentations are simple and scalable to real-world datasets. [Estimator accuracy] Our goal is not to calculate MI precisely but to derive lower and upper MI bounds to design objectives that scale on real-world datasets, making training Factor-CL on large datasets possible. Our work provides opportunities for future work to integrate tighter MI estimators (e.g., Guo et al.) to capture task relevance. Guo et al. Tight mutual information estimation with contrastive Fenchel-Legendre optimization. NeurIPS 2022. [Complex math] The details cover the exact mathematical derivation from shared and unique task-relevant information definitions to our final self-supervised objectives. For clarity, we will add an overview of the derivation steps and intermediate objectives. **We also emphasize that the actual implementation is relatively straightforward to adapt to a standalone contrastive framework (Algorithms 2 and 3). 
Code is included in the supplementary, and we will release it publicly so that all implementation details are fully transparent and reproducible.** [Limitations of CL] Theorem 1 shows that current CL methods only keep shared information and discard unique information from modalities. This makes representations of standard CL sub-optimal because task-relevant information from uniqueness is not captured. This is also empirically supported by Figure 1, where the downstream performance degrades consistently as unique information increases, suggesting the sub-optimality of standard CL. Table 2 also shows that standard CL’s failure to capture unique information leads to inferior performance on real-world datasets. These are fundamental limitations of CL methods that require new learning paradigms to solve. [Shared and unique info, uniqueness gap] We define shared information as $I(X_1; X_2; Y)$, unique information as $I(X_1; Y | X_2)$ and $I(X_2; Y | X_1)$, and uniqueness gap as $I(X_1, X_2; Y) - I(Z_1, Z_2; Y)$. The uniqueness gap measures the difference of task-relevant information between input ($X_1, X_2$) and encoded representation ($Z_1, Z_2$). Standard CL will have this gap because current CL methods only keep shared information and discard unique information. In practice, datasets such as IRFL in Table 3 may have high task-relevant unique information, which may widen this gap in standard CL, as discussed in Lines 36-39. In the proposed Factor-CL, the optimal representations will close this gap by capturing both shared and unique info, as shown in Theorem 3. [No shared or shared is task-irrelevant] Our method handles both of these cases: 1. If there is little shared information, our method will still capture the shared information by the lower bound in Eq. 8 and unique information by the lower bounds in Eq. 9. 2. If most shared information is irrelevant, our method will discard the irrelevant shared information by the upper bound in Eq. 
9 and keep the relevant shared information by the lower bound in Eq. 8. [How bounds impact learning] Our derived lower and upper bounds approximate the actual shared and unique information regions that are intractable to compute exactly. Our bounds are closer to true MI than existing ones (NCE and CLUB), as shown in Figure 3. Optimizing these tight lower and upper bounds enables us to learn representations to capture different information regions efficiently. Factorization is also important - each representation optimizes different lower or upper bound objectives, so we can capture the correct information to satisfy multiple terms simultaneously. This makes training easier and performance better (Table 2 and Lines 287-289). [Practical implications] Besides the implications for multimodal augmentation (Lines 189-196), we discuss augmentation intuitions for each individual modality. For vision, if downstream tasks are object-oriented, the implication is to augment task-irrelevant parts only (e.g., non-object semantic parts if the task is object-centric) or to avoid image augmentations that hugely remove information (e.g., cropping and greyscale). For text, we should try to avoid masking large chunks of text since this can hugely remove information. We want to emphasize that these choices are scalable and much less expensive than labeling data. [How to ensure learning task-relevant information without labels] We discuss this in Section 3.2, specifically Definitions 2 and 3. Essentially, we design suitable augmentations of each modality, extending the work in [62], and replace Y with X’ (augmented data view) to enable the learning of task-relevant information without access to labels. The details can be found in Lines 157-174. [Limitations] We will add more discussion regarding limitations: 1. Optimizing better MI lower and upper bounds for higher-dimensional and complex modalities. 2. Extending the work of InfoMin [Tian et al.] 
to automatically generate data augmentations to satisfy the optimal augmentation assumption, and leveraging progress in generative models for data augmentation. 3. Quantifying shared and unique information to optimize FactorCL better. --- Rebuttal Comment 1.1: Title: Rebuttal response Comment: I thank the authors for submitting the rebuttal. However, the optimal augmentation assumption is still not clear, and Appendix D.3 does not help much. Furthermore, different parts of the paper are quite complex to understand. So, I am leaning toward keeping my original rating at the moment. --- Reply to Comment 1.1.1: Title: Clarification on the optimal augmentation assumption Comment: We thank the reviewer for the kind response; we would like to further clarify the optimal augmentation assumption with the example in Figure 4 of the paper: – $x_1$ = Image: a car speeding on the highway. – $x_2$ = Figurative caption: “The car is as fast as a cheetah.” – $y$ = 1, indicating the figurative description and image are a match. ***Task-relevant information:*** Shared info: car speeding / the car is fast. Unique info in $x_1$: highway. Unique info in $x_2$: cheetah. ***Independent augmentation (Tian et al.; Eq. 17 in our text):*** – Task-relevant info in $x_1$: car, speeding, highway. To augment the image such that $I(x_1; x_1') = I(x_1; y)$, we randomly remove image pixels that are not in the car, the speeding lines, or the highway. – Task-relevant info in $x_2$: car, fast, cheetah. To augment the caption such that $I(x_2; x_2') = I(x_2; y)$, we randomly mask words except for the words: car, fast, and cheetah. ***Optimal unique augmentation (ours, Eqs. 17 and 18 in our text):*** — Task-relevant unique info in $x_1$: $I(x_1; y|x_2) =$ highway. To augment the image such that $I(x_1; y|x_2) = I(x_1; x_1'|x_2)$, we only augment image pixels that are not the highway. — Task-relevant unique info in $x_2$: $I(x_2; y | x_1) =$ cheetah. 
To augment the caption such that $I(x_2; y | x_1) = I(x_2; x_2' | x_1)$, we randomly mask words except for the word cheetah. While these might not be easy to satisfy exactly, we find empirical solutions that work well to approximate the optimal unique augmentation: ***Approximate unique augmentation*** To approximately augment the image such that $I(x_1; y | x_2) \approx I(x_1; x_1' | x_2)$, we apply simple rotation augmentations so that the highway pixels remain intact. To approximately augment the caption such that $I(x_2; y | x_1) \approx I(x_2; x_2' | x_1)$, we simply mask or replace non-object words so that the word cheetah remains intact. Again, the feedback is truly appreciated; please let us know if you have further questions.
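To make the caption side of this approximate unique augmentation concrete, here is a small illustrative sketch (the helper name, masking scheme, and protected-word set are our own illustration, not the authors' implementation):

```python
import random

def unique_text_augment(caption, protected, mask_rate=0.3, seed=None):
    """Mask random words while keeping modality-unique words intact.

    Words in `protected` (e.g. {"cheetah"}) carry the unique
    task-relevant information I(x2; y | x1), so they are never masked;
    other words are masked with probability `mask_rate`, so that
    I(x2; x2' | x1) stays close to I(x2; y | x1).
    """
    rng = random.Random(seed)
    out = []
    for word in caption.split():
        if word.lower().strip('.,!?"') in protected:
            out.append(word)      # unique info stays intact
        elif rng.random() < mask_rate:
            out.append("[MASK]")  # drop non-unique content
        else:
            out.append(word)
    return " ".join(out)
```

For the running example, `unique_text_augment("The car is as fast as a cheetah.", {"cheetah"})` would always preserve "cheetah" while randomly masking the remaining words.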
Rebuttal 1: Rebuttal: Dear reviewers, we are extremely grateful for your valuable feedback and insightful comments. We are glad that you agree that our results are innovative (859W), original (fNpT), significant (fNpT), and applicable to a broad range of settings in contrastive learning (859W, bfCW, N8Dx, 4hLy). Your concrete suggestions are a valuable step in this direction, and we have revised our submission accordingly to take these into account. In this short note, we summarize the main changes we made to our submission: 1. Figure 1 to clarify the conditioning process of conditional Info-NCE, a question raised by Reviewer fNpT. 2. Table 1 to verify the InfoMin assumption (Tian et al.) used in Theorem 1 empirically, showing that Theorem 1 holds in practice - standard CL methods struggle to learn unique information. 3. Table 2 shows FactorCL (ours) further improves performance by using the same modalities and architectures as the supervised baselines from Multibench [37], and FactorCL outperforms the reproduced supervised methods. 4. In Table 3, we also added experiments on NYCaps, a new dataset that requires unique information in cartoon images and humorous captions, and we show FactorCL outperforms SimCLR and SupCon. 5. Even though the optimal augmentations are hard to satisfy, in practice, we use these intuitions to guide augmentation design, yielding state-of-the-art results and scaling to real-world datasets. Our intuition is to avoid augmentations that remove or strongly destroy information shared by the other modality (e.g., the caption) and instead augment via cropping or color jittering in the image. We will clarify this in the paper as a guideline for practitioners. 6. Finally, we would like to emphasize that all code and data are in the supplementary material, and we plan to release all code and data on GitHub after the review period so details are transparent and reproducible. 
We have added all requested details regarding the method and experiments to the appendix. Pdf: /pdf/17a51b9a2ce0158b0e64cada2c3dbb2a67e6788c.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: The goal of the proposed work is control over the information content of representations. The most common form of contrastive learning leverages multi-view redundancy, where data points are paired across different modalities or different augmentations in the same modality, and a contrastive loss is used to extract the redundant (or shared) information across the pairs. The authors propose a methodology to extract the shared information and more---the information that is unique to each modality and still relevant to the task---with each component of the information in its own representation. While contrastive learning based on multi-view redundancy involves maximizing a lower bound on the mutual information (e.g., InfoNCE), the proposed method requires multiple information terms, including the minimization of certain ones, for which the authors propose an efficient scheme of upper and lower bounds. For scenarios where the task is not given, the task-relevant unique information is defined through a principled augmentation scheme. The proposed method is benchmarked on a synthetic experiment, where the information content in each variable can be controlled, and then on several multi-modality datasets, where the goal is simply test set performance. Strengths: The proposed work is well-motivated. Control over the specific information content of learned representations is a valuable research pursuit, and the information theoretic analysis is reasonable. The practicality of the methodology is an important strength: standard contrastive learning losses are all one needs, and the MI upper bounds are a clever re-utilization of critics trained concurrently for MI lower bounds. The strategy of principled data augmentation as a replacement for task information is interesting. Overall, the ideas pursued in this work have substantive originality and significance. 
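The InfoNCE lower bound at the center of this discussion can be computed from paired representations as in the following minimal sketch (an inner-product critic is assumed; this is an illustration, not the paper's code):

```python
import numpy as np

def info_nce(z1, z2):
    """InfoNCE lower bound on I(Z1; Z2), in nats.

    z1, z2: (n, d) arrays where row i of z1 is paired with row i of z2.
    Uses an inner-product critic; the bound can never exceed log(n).
    """
    n = len(z1)
    scores = z1 @ z2.T                                   # critic matrix
    scores = scores - scores.max(axis=1, keepdims=True)  # numerical stability
    logsumexp = np.log(np.exp(scores).sum(axis=1))
    # E[ log( exp(s_ii) / ((1/n) * sum_k exp(s_ik)) ) ]
    return np.log(n) + np.mean(np.diag(scores) - logsumexp)
```

With well-separated positive pairs the estimate saturates near log(n), which is the familiar batch-size ceiling of InfoNCE-based MI estimation.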
Weaknesses: While the ideas pursued in this work are good, there are some issues that should be remedied before publication. - Most importantly, there doesn’t seem to be any support for the way in which the proposed work estimates conditional MI by concatenating the conditional variable(s) to the critic inputs. This is a significant issue, as both objectives in the information decomposition rely on estimation of the conditional mutual information. The details are not given (The Appendix refers to a nonexistent section: “In this work, we implement the conditioning in $p(x_1, x_2∣x_1^\prime, x_2^\prime)$ through concatenation and the details are in Appendix Sec.” (Appendix, L720). Instead there is a seemingly irrelevant discussion of a kernel-based alternative to what is actually done. - As currently written, Theorem 1 is incorrect, as shown by a simple counterexample. The identity transform ($Z_1=X_1$, $Z_2=X_2$) “perfectly maximizes” eqn 2, and gives $I(Z_1,Z_2;Y) = I(X_1,X_2;Y)$. The proof in the Appendix depends on the InfoMin proposition from Tian et al (2020), that posits good representations also minimize $I(Z_1;Y|Z_2)$. This is not the same as saying that normal CL approaches necessarily minimize $I(Z_1;Y|Z_2)$. The identity transform also optimizes Eqn 7, the objectives defining the unique representations, suggesting something else is needed in the objective. - The idealized augmentation strategy is a nice return to the self-supervised setting, but it feels too far removed from reality. L167: "In the case of image classification, task-relevant information is the object in the picture, while task-irrelevant information is the background." No, the task relevant information is only a couple of bits, and $I(X;X^\prime)=I(X;Y)$ is not achieved by only swapping out the background. An augmentation that would achieve this extremely high bar would swap the object for another of its class, changing the image entirely. 
Perhaps another way to see it: every pixel pertaining to the car in Fig 4 could be jittered, or the structure of the car could be changed quite significantly while still linking it to the original image X, and this represents a ton of information $I(X;X^\prime)$ for any commonly used augmentation, far exceeding $I(X;Y) \le H(Y)$. While motivation by an ideal augmentation can be helpful, I think the section would benefit from a more sober take on the implications of realistic augmentations on the proposed information decomposition. - Important details seem to be missing. - What specifically is Cross+Self and Cross+Self+Fact? It is referred to in vague terms, as a “category” (L219), “capturing a range of methods” (L218), and Cross+Self+Fact “is approximately done in prior work” (L221). - I like the setup of the synthetic experiments, but they are opaque as well. Without more information, the accuracies of Fig. 1 mean little. All methods achieve better than 90% for all settings, even SimCLR when all information is unique? The text (main and appendix) is vague: “The label $y$ is generated as a function (with nonlinearity and noise) of varying ratios of $w_s$, $w_1$, and $w_2$ to represent shared and unique task-relevant information” (L239 and 874). After searching the attached code, the label $Y$ is created with a threshold on a sigmoid of the average of vector components, which is no different than a simple hyperplane -- no nonlinearity and no noise. With more clarity, the synthetic results could be a much stronger inclusion in the current work. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: - In theory, the interaction information (“task relevant shared information”) $I(X_1;X_2;Y)$ can be negative. Are there any interesting consequences of such a scenario in the proposed method? - Is Definition 1 expressing what is intended? I imagine the intent is to express $I(X_1;Y|X_2)>0$, but it does not seem correct. - What is RUS? 
It is all over the attached code, without any reference in the text nor any comments in the code. RUSModel, RUSAugment, RUS_CIFAR10, etc. Minor points: - There is a $1/K$ missing in the denominator of Eqn. 10 - Typo in Fig 1, both input variables are labeled $X_1$ - Typo in Line 113, the unique information is written as the shared info - Fig 2: it is confusing that the argmax LHS is replaced with information bounds on the RHS, and that $Y$ is replaced by $X_1'$ and $X_2'$ without explanation and without removing $Y$ from the Venn diagrams on the RHS - Some mutual information quantities seem to be in bits (Table 1) and others in nats (Fig 3), though it’s rarely stated. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: Yes, limitations were addressed in Appendix A. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: [Conditional MI] We apologize for the incomplete reference in Appendix and have fixed it. This conditioning scheme is briefly stated in Lines 171-172 and elaborated in Lines 826-830 for supervised and Lines 834-837 for SSL. Conditioning is done by concatenating the encoded $X_1$ and encoded $X_1’$, concatenating encoded $X_2$ and encoded $X_2’$, and feeding the two vectors into $I_{NCE}$ and $I_{NCE-CLUB}$ - see Figure 1 in attached rebuttal pdf. Conditioning by concatenating is commonly used, for example, in: Mirza and Osindero, Conditional Generative Adversarial Nets. 2014. Reed et al., Generative Adversarial Text to Image Synthesis. ICML 2016. Rombach et al., High-Resolution Image Synthesis with Latent Diffusion Models. CVPR 2022. [Theorem 1] **We would like to clarify three assumptions referred to in this paper, which have subtle differences (using $I(X_1; Y | X_2) $ without loss of generality):** 1. Multi-view redundancy (about data): $I(X_1; Y | X_2) \le \epsilon$, used by standard CL but **not** by Theorem 1, states that the task-relevant information from unique part is minimal ($\le \epsilon$); 2. Non-negative unique information (about data): $I(X_1; Y | X_2) > 0$, used by Theorem 1 (Line 94), states that task-relevant information from unique part is nonzero; 3. InfoMin Tian et al. (about representation): $I(Z_1; Y | X_2) = 0$, used by Theorem 1 (Lines 645-649), states that the optimal representation $Z_1$ learns task-relevant information only from the shared part. We will clarify these in the main text. **Theorem 1 is correct under Assumptions 2 and 3.** Under Assumptions 2 and 3, $Z_1=X_1, Z_2=X_2$ is not possible for Theorem 1, as $Z_1$ and $Z_2$ only capture the shared part from Assumption 3, but $X_1$ and $X_2$ contain task-relevant information from the unique part from Assumption 2, which cannot be captured by $Z_1$. **We also check Assumptions 2 and 3 empirically**. 
Table 1 and 4 in the main text show that Assumption 2 holds empirically. To verify Assumption 3, we use the synthetic dataset as Table 1 and measure $I(Z_1; Y | X_2)$ in standard CL (SimCLR). We get $I(X_1; X_2)=12.29$ and $I(Z_1; Y | X_2)=0.4$ (see Table 1 in the rebuttal pdf). $I(Z_1; Y | X_2)$ is much smaller and closer to zero than $I(X_1; X_2)$, indicating that both Assumptions are reasonable and Theorem 1 holds in practice. Nevertheless, if the InfoMin assumption does not hold, we get $I(Z_1, Z_2; Y) \leq I(X_1, X_2; Y)$, with the equality satisfied only if $Z_1=X_1$ and $Z_2=X_2$. We will add the equality case to our results. Since identity transformation is nearly impossible in real-world datasets, Theorem 1 still holds for empirical scenarios. Identity transformation also optimizes the uniqueness in Eq. 7: this does not violate any of our assumptions and results but does not yield ideal representations for each term (Lines 124-129). Empirically, through the term $-I_{NCE-CLUB}(X_1; X_2)$ in Eq. 9, the loss will remove the shared part in $Z_{U_1}$ and $Z_{U_2}$, making $Z_{U_1} = X_1$ and $Z_{U_2} = X_2$ practically very unlikely. [Augmentation] Below, we discuss the implications of three types of augmentations: 1. Augmentations that exactly satisfy $I(X_1; X_1') = I(X_1; Y)$ and $I(X_2; X_2'|X_1) = I(X_2; Y|X_1)$ (Eqs.17-18); exactly satisfying these conditions is hard, but empirically we do not exactly require this assumption - instead we relax it to: 2. Augmentations that approximately satisfy $I(X_1; X_1') \approx I(X_1; Y)$ and $I(X_2; X_2'|X_1) \approx I(X_2; Y|X_1)$: our intuition is to avoid augmentations that will remove or strongly destroy information shared by the other modality (e.g., the caption), and augment via cropping or color jittering in the image. We will clarify this in the paper as a guideline for practitioners. 
These **consistently outperform independent augmentations (Table 3), suggesting that approximately satisfying this condition is sufficient**. 3. Augmentations that do not consider this condition at all (i.e., independent modality augmentations as done in prior work): this has worse performance. [Cross+self] Cross + Self refers to all self-supervised methods that learn one representation trained jointly for two objectives: cross-modal contrastive loss (e.g., image-text contrastive) plus unimodal contrastive (e.g., vision only contrastive from two image views) [e.g., 27,29,56]. Cross+Self+Fact includes self-supervised methods with separate representations, one trained for cross-modal contrastive objectives and another trained for unimodal contrastive objectives, such as [76,79]. [synthetic exps] The data is generated using multivariate Gaussians with fixed means and variances, which add noise and randomness. The label is a linear function of the latent variables - we found this setup to display the most evident trends. We also tried non-linear data with noise but observed that all methods fluctuate more since we only do linear probing on the final representation. We refer the reader to our experiments on real-world datasets (Table 2) for comparisons with non-linear labels and noise. [Negative interactive information] Interaction information is: 1. Positive when there is more task-relevant shared info than irrelevant, so FactorCL has to do more work in **capturing** task-relevant information (via lower bound). 2. Negative when there is more task-irrelevant shared info than relevant, so FactorCL has to do more work in **removing** task-irrelevant information (via upper bound). These insights also give a better idea of the weights assigned to learning from these two objectives. [RUS] RUS was the initial acronym we used, standing for redundant, unique, and synergistic interactions that we aimed to learn via factorized learning. [Typos and bits vs. 
nats] We have corrected the term and added notes to distinguish when we use nats or bits. [Figure] We have fixed the typo in Figures 1 and 2 (used $X’$ instead of $Y$) and added them to the rebuttal PDF. --- Rebuttal Comment 1.1: Comment: I am worried that there remains no support for the conditional mutual information estimation that forms such an important part of the proposed method. The three references proposed in the rebuttal are emphatically $\textbf{not}$ estimating conditional mutual information. Concatenation opens the function space to include distributions conditioned on the concatenated variable, but it does not prescribe a conditional distribution. Consider the schematic in the rebuttal pdf. InfoNCE lower bounds the mutual information between the random variables serving as its two inputs. The two inputs are two new random variables formed by the concatenation -- call $\tilde{Z}_1 \sim p(Z_1, Z_1^\prime)$ and similarly for $\tilde{Z}_2$. InfoNCE provides a lower bound on $I(\tilde{Z}_1;\tilde{Z}_2)$. Phrased another way, how can InfoNCE know what part of the concatenation to condition on? Conditional mutual information is not trivial to estimate; see for example "CCMI : Classifier based Conditional Mutual Information Estimation", Mukherjee et al. 2020. If the authors can resolve this issue or point out a misunderstanding on my end, I am happy to raise my score. Otherwise I am leaning more towards reject than before, because the information theoretic basis of this work is called into question. Another thing: the identity transform is just a counterexample. It represents any invertible transformation, meaning all information is preserved. This is far more common than the authors suggest in the rebuttal ("Since identity transformation is nearly impossible in real-world datasets, Theorem 1 still holds for empirical scenarios.") -- in general, a point transform between two continuous spaces will preserve all information. See e.g. 
"On the information bottleneck theory of deep learning" (Saxe et al. 2018) for a nice discussion around this point. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the comment, we would like to clarify our estimation of Conditional MI (CMI): Consider the CMI term $I(X_1, X_2 | C)$, where $C$ is a conditioning variable. **Conditional InfoNCE (our Eq. 15) is proved to be a lower bound of CMI in Sordoni et al., Proposition 1**: $$I_\text{NCE}(X_1; X_2|C) = \mathbb{E}_{p(C)} \left[ \mathbb{E} \left[ \log \frac{\exp \{\phi(x_1,x_2^+, c)\}}{\frac{1}{k} \sum_k \exp \{\phi(x_1, x_2^-, c)\}} \right] \right],$$ where the inner expectation is taken w.r.t $\left(x_1,x_2^+ \sim p(x_1,x_2|c), x_2^- \sim p(x_2|c)\right)$. It has two key differences from InfoNCE: 1. Positive pairs $(x_1,x_2)$ are sampled from $p(x_1,x_2|c)$, and negative pairs $(x_1,x_2^-)$ are sampled from $p(x_2|c)$. 2. The critic takes the form $\phi(x1,x2,c)$, which Sordoni et al. implement as $f([x_1,c])^\top g(x_2)$ for trainable encoders $f()$ and $g()$, $[x_1,c]$ denotes concatenation (Sordoni et al. equation 36). We implement our method in the same way, with conditioning variable $C=(X_1’, X_2’)$ as augmented modalities: 1. Positive pairs $(x_1,x_2)$ are sampled from $p(x_1,x_2|x_1’,x_2’)$, this is effectively the original modality pair $(x_1,x_2)$ and their augmentations $(x_1’,x_2’)$. 2. Sampling negative pairs $(x_1,x_2^-)$ from $p(x_2|x_1’, x_2’)$ is nontrivial, especially when $x_1’, x_2’$ are continuous; Sordoni's boosted critic estimation (their section 4.3) addresses this issue, yielding an empirically accurate CMI estimator. Since we focus on scaling our method to real-world datasets, we use this for simplicity (Sordoni also discuss variational approximations and importance sampling as alternative methods). 3. 
Our critic function takes the form $\phi(x_1,x_2,x_1’,x_2’)$, which we implement as $f([x_1,x_1’])^\top g([x_2,x_2’])$ for trainable encoders $f()$ and $g()$, where $[x_1,x_1’]$ and $[x_2,x_2’]$ denote concatenation, and $f()$ and $g()$ are specific to modalities 1 and 2 respectively, again justified by Sordoni et al. Essentially, the critic knows which variable $C=c$ is being conditioned on because $c$ stays constant while $x_1,x_2$ change. For every $c \in C$, the model implicitly learns a separate InfoNCE objective, call it InfoNCE($c$), to lower bound $I(X_1; X_2|C=c)$, since $c$ is now a (concatenated) input to InfoNCE. We have added a detailed discussion and a reference to Sordoni et al. in the paper. Conditional InfoNCE-CLUB (Eq. 16) can similarly be shown to upper bound CMI, by extending Theorem 5 in Appendix C.1 (InfoNCE-CLUB >= MI) with Proposition 1 in Sordoni et al. (plugging in the optimal critics for Conditional InfoNCE). **We also ran new experiments verifying that Conditional InfoNCE $<=$ CMI $<=$ Conditional InfoNCE-CLUB**. Following the setup of Mukherjee et al., we computed our estimators and the true CMI on synthetic data, first fixing the dimension of the representation $Z$ at $d_z=20$ and varying the number of samples $n$, then fixing $n$ and varying $d_z$. We used our implementations of conditional InfoNCE (Eq. 13) and conditional InfoNCE-CLUB (Eq. 14).
Results are:

| Number of samples ($\times 10^3$), fix $d_z=20$ | 5 | 10 | 20 | 50 |
| ----------------- | ---- | ---- | ---- | ---- |
| CCMI (MI-Diff + Classifier) | 2.03 | 2.06 | 2.15 | 2.20 |
| Conditional InfoNCE (our lower bound) | 2.19 | 2.20 | 2.20 | 2.20 |
| Conditional InfoNCE-CLUB (our upper bound) | 3.45 | 3.53 | 2.98 | 2.86 |
| True CMI | 2.32 | 2.32 | 2.32 | 2.32 |

| Dimension $d_z$, fix $n=20 \times 10^3$ | 1 | 10 | 20 | 50 | 100 |
| ------------------ | ---- | ---- | ---- | ---- | ---- |
| CCMI (MI-Diff + Classifier) | 2.30 | 2.18 | 2.15 | 1.98 | 1.67 |
| Conditional InfoNCE (our lower bound) | 2.18 | 2.20 | 2.20 | 2.26 | 2.30 |
| Conditional InfoNCE-CLUB (our upper bound) | 3.70 | 2.95 | 2.98 | 2.79 | 2.86 |
| True CMI | 2.32 | 2.32 | 2.32 | 2.32 | 2.32 |

**Our estimators satisfy Conditional InfoNCE $<=$ CMI $<=$ Conditional InfoNCE-CLUB, and are comparable to Mukherjee et al.'s estimator, suggesting that our method yields valid and competitive lower and upper bounds for CMI.** Finally, we clarified in the paper that we learn representations by optimizing lower and upper bounds of CMI, and acknowledged that exact CMI estimation is difficult (referencing Sordoni et al., Mukherjee et al., and Molavipour et al.). [Identity transformation] We agree that an invertible transformation may exist; nevertheless, in contrastive learning, **the representation $Z$ is often lower dimensional than the high-dimensional raw data $X$ (images/videos)**, making identity and other invertible transformations impossible. This implies that Theorem 1 still holds in empirical scenarios. We have also added a discussion of Saxe et al. Thank you again for your extremely insightful feedback.

Sordoni et al. https://arxiv.org/abs/2106.13401
Mukherjee et al. https://arxiv.org/abs/1906.01824
Molavipour et al. https://arxiv.org/abs/2006.07225
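As an editorial illustration of the estimator discussed in this thread, here is a minimal NumPy sketch of the conditional InfoNCE quantity: the average of $\log[\exp\{\phi(x_1,x_2^+,c)\}/(\frac{1}{K}\sum_k \exp\{\phi(x_1,x_{2,k}^-,c)\})]$. The data, the hand-picked bilinear critic, and the crude in-batch negative sampling are all invented stand-ins, not the trained encoders $f$, $g$ or the boosted negative sampling of Sordoni et al. that the rebuttal relies on:

```python
import numpy as np

def conditional_infonce(f_pos, f_neg):
    """Conditional InfoNCE estimate from critic scores.

    f_pos: (B,) scores phi(x1, x2^+, c) for positive pairs.
    f_neg: (B, K) scores phi(x1, x2^-, c) for K negatives per anchor.
    Returns log[exp(f_pos) / ((1/K) sum_k exp(f_neg_k))] averaged over anchors.
    """
    log_mean_neg = np.log(np.mean(np.exp(f_neg), axis=1))
    return float(np.mean(f_pos - log_mean_neg))

rng = np.random.default_rng(0)
B, K, d = 256, 64, 8

# Toy conditional data: x2 = x1 + c + noise, so I(X1; X2 | C) > 0.
c  = rng.normal(size=(B, d))
x1 = rng.normal(size=(B, d))
x2 = x1 + c + 0.5 * rng.normal(size=(B, d))

def critic(x1, x2, c):
    # Hand-picked stand-in for a trained critic f([x1, c])^T g(x2).
    return (x1 + c) @ x2.T / np.sqrt(d)

scores = critic(x1, x2, c)              # (B, B): anchor i vs candidate j
f_pos = np.diag(scores)
# Crude in-batch negatives: K other candidates per anchor. (Sampling truly
# from p(x2 | c) is nontrivial, as the rebuttal itself notes.)
neg_idx = np.stack([rng.choice(np.delete(np.arange(B), i), size=K, replace=False)
                    for i in range(B)])
f_neg = np.take_along_axis(scores, neg_idx, axis=1)
est = conditional_infonce(f_pos, f_neg)

# Sanity check: permuting x2 destroys the conditional dependence, so the
# estimate should drop sharply.
perm = rng.permutation(B)
scores_ind = critic(x1, x2[perm], c)
est_ind = conditional_infonce(np.diag(scores_ind),
                              np.take_along_axis(scores_ind, neg_idx, axis=1))
```

With matched pairs the estimate comes out clearly positive, while the permuted (independent) pairs give a much smaller value, which is the qualitative behavior expected of a lower-bound estimator of CMI.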
Decompose a Task into Generalizable Subtasks in Multi-Agent Reinforcement Learning
Accept (poster)
Summary: The paper proposes a new neural network architecture for representing agent policies in multi-agent reinforcement learning. The architecture consists of two parts: (i) a subtask encoder that chooses the subtask to perform based on the subtask-observation history and (ii) a subtask semantics module that chooses the action based on the current observation and the output of the subtask encoder. The user specifies the number of subtasks as a hyperparameter and both modules are trained end-to-end. Experimental evaluation shows that this architecture enables SOTA performance in single-task as well as multi-task settings. Furthermore, the approach enables training policies that generalize to new scenarios in zero-shot and finetuning settings. A qualitative analysis shows that the subtasks learned can be meaningful and interpretable. Strengths: 1. The idea of splitting the policy into two modules to decompose the overall task into subtasks is interesting and seems to work pretty well in the StarCraft environment. The ability to discover interpretable subtasks is an important benefit of the approach. 2. Zero-shot generalization abilities are improved by this architecture. The learned policy generalizes reasonably well to new scenarios obtained by modifying the original task. 3. The overall architecture is logical and appears easy to implement. Weaknesses: 1. The work seems somewhat incremental and I am unsure if there is sufficient reason to believe the generality of the approach w.r.t. different MARL applications and scenarios. The results are purely empirical and it would strengthen the paper to consider another environment different from the StarCraft domain studied in this paper. 2. Some design choices are not fully justified. For example, hiding the action history from the subtask encoder is mentioned as a reason to enable cross-task generalization. 
However, the observation history might contain information about the action history in some environments and it is unclear if hiding the action history in such a case will make a difference. In fact, I believe it is important to include an ablation in which the action history is also given to the subtask encoder to show that the proposed method is better. 3. The architecture seems to use standard techniques and it would really improve the paper to explain the difference w.r.t. prior work and what is novel about this particular architecture---e.g., how it differs from the baselines and what parts of the architecture are borrowed from prior work. 4. The decomposition of the task into subtasks can also be achieved using a hierarchical approach such as the one in [1]. I believe that it is important to compare the empirical performance of the approach presented in this paper with that of a hierarchical approach. [1] Yang, Jiachen, Igor Borovikov, and Hongyuan Zha. "Hierarchical Cooperative Multi-Agent Reinforcement Learning with Skill Discovery." International Conference on Autonomous Agents and Multi-Agent Systems. 2020. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. What parts of the architecture are novel and how does it differ from prior work? 2. Is there any prior work that tries to decompose the policy into a subtask encoder and a subtask semantics module, either in the cooperative multi-agent setting or the single-agent setting? 3. Would it make sense to consider freezing the subtask semantics module (after training on multiple tasks) during finetuning to perform new tasks that can be accomplished using the same subtasks? This might demonstrate that the learned subtasks generalize to completely new tasks which may require performing the subtasks in a different order. 4. Is the architecture trainable in a hierarchical way---i.e., train the subtask encoder at a higher level (grouping transitions in some way) than the subtask semantics module? 
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: Limitations are not explicitly discussed and it would be good to include them---e.g., reliance on the assumption that the action space can be decomposed in a certain way (self-action and actions affecting other agents). Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We greatly appreciate your advice on further enhancing this paper. We would like to discuss your points one by one and would greatly appreciate any further discussion. 1. > **Weakness 1**: Experiments on another environment are needed We conducted zero-shot experiments on the physical deception task (Spread) in the multi-agent particle world environments (MPE). The experimental results can be seen in "**Enhancement 2**" of our global response. DT2GS outperforms UPDeT and ASN_G in terms of zero-shot generalization capability, surpassing them by about 17.05 and 44.32 in average episode reward, respectively. 2. > **Weakness 2**: It's unclear whether hiding the action history can enable cross-task generalization or not. An ablation is needed. Perhaps the reviewer misunderstood our approach: rather than merely hiding the action history, replacing the action history with the subtask history is one of the reasons for achieving cross-task generalization. Moreover, since different tasks have different action spaces, feeding the action history to the subtask encoder would make DT2GS non-generalizable in terms of model structure. Replacing the action history with the subtask history not only preserves the agent's abstract (high-level) behavior history but also makes DT2GS structurally generalizable across tasks. 3. > **Weakness 3 && Question 1**: Which parts of DT2GS are borrowed from prior work and which parts are new? To adapt the model structurally to varying observation/action dimensions, DT2GS adopts the population-invariant network (PIN) structure developed in UPDeT, whose proposal was in turn inspired by the network design of ASN. The innovations in our model architecture mainly include two parts: (a) We designed a subtask encoder from the perspective of cross-task generalization via a structure-designed approach.
(b) We constructed an Adaptive Subtask Semantics module, which was not proposed by any previous work, endowing subtasks with consistent yet scalable semantics across tasks to improve the model's generalization capability. 4. > **Weakness 4**: Lack of empirical comparison with HSD. HSD is proposed to discover complementary skills (subtasks) in MARL, so it is closely related to DT2GS. Therefore, we added HSD to our references and included it as a baseline in our experiments. The specific results can be seen in "**Enhancement 1**" of our global response. It can be observed that DT2GS significantly outperforms HSD in terms of asymptotic performance. 5. > **Question 2**: Is there any prior work that tries to decompose the policy into a subtask encoder and a subtask semantics module? DT2GS is the first work that endows subtasks with consistent yet scalable semantics across tasks via the Adaptive Subtask Semantics module. Several prior works contain a subtask encoder in their models to construct subtasks/skills/roles/options [1,2,3,4,5]. However, apart from ODIS [1], none of these works designed a subtask encoder from the perspective of cross-task generalization. And unlike ODIS, which endows the subtask encoder with generalization via a data-driven approach, DT2GS does so via a structure-designed approach. > [1] Zhang et al., "Discovering generalizable multi-agent coordination skills from multi-task offline data", ICLR 2023. > > [2] Yang et al., "LDSA: Learning dynamic subtask assignment in cooperative multi-agent reinforcement learning", NeurIPS 2022. > > [3] Wang et al., "RODE: Learning roles to decompose multi-agent tasks", ICLR 2021. > > [4] Wang et al., "ROMA: Multi-agent reinforcement learning with emergent roles", ICML 2020. > > [5] Yang et al., "Hierarchical cooperative multi-agent reinforcement learning with skill discovery", AAMAS 2020. 6. > **Question 3**: Conduct transfer under the setup of freezing the subtask encoder.
Actually, the zero-shot generalization experiment we conducted froze both the subtask encoder and the subtask semantics module, which is more stringent than freezing the subtask encoder alone. Therefore, the zero-shot generalization experiments have demonstrated that the learned subtasks generalize well to completely new tasks that may require performing the subtasks in a different order. 7. > **Question 4**: Is DT2GS trainable in a hierarchical way? Yes, we can train the subtask encoder at a higher level than the subtask semantics module. However, hierarchical training is more commonly used in sparse-reward setups; combined with intrinsic rewards, such a training scheme helps with temporal credit assignment. 8. > **Limitations**: Assumption that the action space can be decomposed in a certain way We added this limitation to our revised paper. In the future, we will improve DT2GS in ways such as automatically decomposing the action space to alleviate this limitation. --- Rebuttal Comment 1.1: Title: We sincerely invite further discussion. Comment: Dear reviewer, as the discussion stage is coming to a close, we kindly await your comments and suggestions. We hope that our responses and the additional experiments conducted during the rebuttal period have addressed all your questions and concerns. Do you have any further concerns or suggestions you would like to raise? We are eager to engage in a productive conversation with you.
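As an editorial illustration of the structural-generalization argument in point 2 of the rebuttal above: because the subtask history is a fixed-size one-hot over $n_k$ task-independent subtasks, while an action history would have a task-specific dimension, an encoder conditioned on subtask history can keep identical weights across tasks. The NumPy sketch below is a deliberately simplified stand-in (every shape, the mean-pooling, and the plain softmax are invented; the actual DT2GS encoder uses a transformer-style population-invariant network and Gumbel-Softmax sampling):

```python
import numpy as np

rng = np.random.default_rng(0)

N_SUBTASKS = 4   # n_k: task-independent, so fixed across tasks
D_OBS = 6        # per-entity observation feature size (assumed shared)
D_HID = 16

# Hypothetical encoder weights -- shapes depend only on n_k and the
# per-entity feature size, never on a task's action-space size.
W_ent = rng.normal(scale=0.1, size=(D_OBS, D_HID))
W_sub = rng.normal(scale=0.1, size=(N_SUBTASKS, D_HID))
W_out = rng.normal(scale=0.1, size=(D_HID, N_SUBTASKS))

def choose_subtask(entity_obs, prev_subtask_onehot):
    """entity_obs: (m, D_OBS) for m entities; m may vary per task."""
    pooled = (entity_obs @ W_ent).mean(axis=0)       # invariant to m and order
    h = np.tanh(pooled + prev_subtask_onehot @ W_sub)
    logits = h @ W_out
    p = np.exp(logits - logits.max())                # softmax over subtasks
    return p / p.sum()

prev = np.eye(N_SUBTASKS)[0]
# The SAME weights handle a 5-entity task and a 9-entity task.
p_small = choose_subtask(rng.normal(size=(5, D_OBS)), prev)
p_large = choose_subtask(rng.normal(size=(9, D_OBS)), prev)
```

Had the encoder taken an action-history one-hot instead, `W_sub` would need a per-task input dimension (the task's action-space size), breaking this weight sharing — which is the rebuttal's point about structural non-generalizability.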
Summary: This paper focuses on transfer learning of multi-agent reinforcement learning (MARL), by establishing generalizable sub-tasks to enable knowledge reuse. Empirical evidence underscores that the proposed algorithm demonstrates robust zero-shot generalization across a variety of tasks. Furthermore, the algorithm showcases ample transferability and surpasses contemporary methods in both multi-task and single-task problem areas. Strengths: (a) The design of the algorithm is presented convincingly and with remarkable clarity, providing an adequate level of detail. (b) The paper presents compelling empirical results that display the algorithm's leading-edge performance in both multi-task and single-task scenarios. The case study outlined in Section 4.3 greatly aids in illuminating its superior capabilities. (c) The exploration of transferable MARL represents a pivotal research area with substantial potential for practical applications. Weaknesses: (a) The paper seems to omit the related work section, which could be pivotal in comprehending the originality and contributions of this study. (b) As it stands, the algorithm's generalization capacity is restricted to target tasks that can be broken down into a series of task-independent subtasks derived from source tasks. If the tasks are more diverse, it would be beneficial to consider task-specific subtasks as well. Technical Quality: 3 good Clarity: 3 good Questions for Authors: (a) The paper seems to address the same research problem as reference [36]. It would be beneficial to clarify your unique contributions and innovations in comparison. (b) For zero-shot generalization to be applicable across multiple tasks, is it necessary for the tasks to be related or to follow the same distribution? If so, it's crucial to articulate this clearly. (c) An overview of studies on multi-task skill-based (also known as hierarchical) MARL should be provided. 
More broadly, it would be useful to include work on multi-agent option (also known as skill) discovery in the related works section. (d) The "Adaptive Subtask Semantics" section appears to heavily rely on the Action Semantics Network (ASN). Providing some background on ASN could aid in comprehension. (e) In the multi-task settings, is the sole difference among tasks the number of entities, for example, shifting from 8m to 10m? (f) Given its close relevance, it would be beneficial to include [36] as one of the baseline comparisons. (New results are not mandatory.) Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The limitation part is not provided. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your positive review, insightful feedback, and constructive comments that help improve the quality of the paper! We are glad to answer your questions and would appreciate any further response. 1. > **Weakness (a)**: The paper seems to omit the related work section, which could be pivotal in comprehending the originality and contributions of this study. > > **Question (c)**: An overview of studies on multi-task skill-based (also known as hierarchical) MARL should be provided. More broadly, it would be useful to include work on multi-agent option (also known as skill) discovery in the related works section. > > **Question (d)**: The "Adaptive Subtask Semantics" section appears to heavily rely on the Action Semantics Network (ASN). Providing some background on ASN could aid in comprehension. We added a section of related work to the appendix of the revised paper. For details, please see "**Weakness 1**" of our global response. 2. > **Weakness (b)**: As it stands, the algorithm's generalization capacity is restricted to target tasks that can be broken down into a series of task-independent subtasks derived from source tasks. If the tasks are more diverse, it would be beneficial to consider task-specific subtasks as well. Enabling the MARL model to generalize across more diverse tasks is a challenging problem. In the future, we will improve DT2GS by considering both task-independent and task-specific subtasks, to adapt to larger distribution shifts between source and target tasks. 3. > **Question (a)**: The paper seems to address the same research problem as reference ODIS[36]. It would be beneficial to clarify your unique contributions and innovations in comparison. > > **Question (f)**: Given its close relevance, it would be beneficial to include [36] as one of the baseline comparisons. (New results are not mandatory.) DT2GS is quite different from ODIS.
Specifically: - DT2GS belongs to the online paradigm, while ODIS belongs to the offline paradigm. The different paradigms make a direct comparison between the two algorithms of limited significance. To demonstrate this, we directly compare the experimental results from the ODIS paper [7] with the experimental results of DT2GS.

| Task | DT2GS | UPDeT | UPDeT-l | UPDeT-m | ODIS |
| -------- | -------------- | ----------- | ----------- | ----------- | ---------- |
| 3m | **100 ± 1.3** | 98.4 ± 4.0 | 71.0 ± 16.6 | 82.8 ± 16.0 | 98.4 ± 2.7 |
| 5m_vs_6m | **93.8 ± 7.0** | 50.0 ± 29.5 | 12.1 ± 12.6 | 17.2 ± 28.0 | 53.9 ± 5.1 |

As shown in the table, the results for UPDeT-l, UPDeT-m, and ODIS come from the ODIS paper and are obtained from expert offline data. We can observe that DT2GS outperforms ODIS in terms of asymptotic performance, especially on the hard task 5m_vs_6m. However, this performance difference cannot be used to prove which algorithm is better, as it is more likely caused by the paradigm to which each algorithm belongs. Furthermore, the UPDeT in our paper (the UPDeT column) is much better than the UPDeT in the ODIS paper (the UPDeT-l and UPDeT-m columns), which further supports this conclusion. - Different problem orientations. ODIS is designed to discover generalizable subtasks (skills) from multi-task offline data. Although DT2GS can interact with multiple tasks simultaneously, its original intention was to discover task-independent subtasks from a single source task and then generalize them to multiple target tasks. - Different designs of the subtask encoder. ODIS endows the subtask encoder with generalization via a data-driven approach, which prevents the subtask encoder from overfitting to a particular source task through multi-task offline data.
Different from ODIS, DT2GS endows the subtask encoder with generalization via a structure-designed approach, which assigns subtasks based on the agent's {subtask, entity-observation} history rather than the {action, observation} history. DT2GS does not conflict with multi-task training, and its generalization capability may even be further improved by interacting with multiple tasks simultaneously. - DT2GS further studies the adaptive semantics of the same subtask on different tasks, which is important for further improving the model's generalization capability and was not mentioned in ODIS. The generalizable subtasks should maintain consistent yet scalable semantics across tasks. That is, the same subtask may have different manifestations on different tasks, which can be captured by constructing adaptive semantics of the subtask. 4. > **Question (b)**: For zero-shot generalization to be applicable across multiple tasks, is it necessary for the tasks to be related or to follow the same distribution? If so, it's crucial to articulate this clearly. The zero-shot generalization scenario of DT2GS requires a moderate distribution shift between the source and target tasks. We articulated this requirement in our revised paper. 5. > **Question (e)**: In the multi-task settings, is the sole difference among tasks the number of entities, for example, shifting from 8m to 10m? In the two multi-task problems, namely the *marine*-series tasks ({3m, 8m, 8m_vs_9m, 10m_vs_11m}) and the *stalker_zealot*-series tasks ({2s3z, 3s5z, 3s5z_vs_3s6z}), the number of entities varies across tasks. However, we have demonstrated in Figure 1 that DT2GS has sufficient transferability across tasks in which the number and type of entities shift simultaneously. --- Rebuttal Comment 1.1: Comment: I appreciate the effort made by the authors for this rebuttal. Most of my concerns are resolved. I would maintain my score at this stage.
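To illustrate the "consistent yet scalable semantics" claim discussed in this thread: the rebuttals describe the adaptive subtask semantics as a scaled dot-product attention over entity-observations, with the subtask embedding as the query. A minimal NumPy sketch under assumed shapes (the layer sizes and random weights here are invented for illustration; the real model uses trained projections):

```python
import numpy as np

rng = np.random.default_rng(1)

D_K = 8          # key/query/value dimension
D_OBS = 6        # per-entity observation size
N_SUBTASKS = 4

W_Q = rng.normal(scale=0.3, size=(N_SUBTASKS, D_K))  # query from subtask one-hot
W_K = rng.normal(scale=0.3, size=(D_OBS, D_K))
W_V = rng.normal(scale=0.3, size=(D_OBS, D_K))

def subtask_semantics(subtask_onehot, entity_obs):
    """Attention-weighted sum of entity-observations.

    entity_obs: (m, D_OBS); m can differ across tasks, yet the output
    dimension stays D_K -- the same subtask yields a fixed-size semantics
    vector whose content adapts to each task's entities.
    """
    q = subtask_onehot @ W_Q                 # (D_K,)
    K = entity_obs @ W_K                     # (m, D_K)
    V = entity_obs @ W_V                     # (m, D_K)
    scores = K @ q / np.sqrt(D_K)            # (m,) attention logits
    w = np.exp(scores - scores.max())
    w /= w.sum()                             # softmax over entities
    return w @ V                             # (D_K,)

z = np.eye(N_SUBTASKS)[2]                    # embedding of subtask 2
sem_5 = subtask_semantics(z, rng.normal(size=(5, D_OBS)))   # 5-entity task
sem_9 = subtask_semantics(z, rng.normal(size=(9, D_OBS)))   # 9-entity task
```

The attention weights distribute the subtask's "effect" over however many entities the task contains (scalable), while the output space is shared across tasks (consistent).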
Summary: This work proposes DT2GS (Decompose a Task inTO a series of Generalizable Subtasks) that addresses multi-agent reinforcement learning in the contexts of zero-shot generalization, transfer, and multi-task. DT2GS learns task-independent subtasks that are characterized by the effects of each agent on itself and other agents when solving the subtask. The proposed method outperforms baselines in SMAC environment in zero-shot, transfer, and multi-task scenarios. Strengths: **S1. Intuitive and motivating formulation of subtasks** The concept of defining a subtask based on the influence of an agent on other entities is quite interesting and intuitive. This approach effectively translates into the context of multi-agent Reinforcement Learning (RL) problems, much like how the definition of a subtask is grounded on the Markov Decision Process (MDP) dynamics in single-agent RL problems. This parallel offers a refreshing perspective on tackling multi-agent RL scenarios. Also, it is well demonstrated in Figures 5 and 6. **S2. Generalizing MARL** The approach of expanding the scope of multi-agent Reinforcement Learning (RL) to incorporate zero-shot and multitask problems, achieved through the introduction of task-independent subtasks, is notably intriguing. Furthermore, the experiments have been designed to effectively demonstrate these problems. Weaknesses: **W1. Lack of clarity** This paper could significantly benefit from substantial revisions to enhance its overall clarity. - Concerning the technical writing, the manuscript suffers from the frequent use of undefined notations and interchangeable deployment of different notations, which create confusion. There are discrepancies between the notations used in equations and those in figures. For instance, the subscripts n, N, and n_a are inconsistently applied, leading to ambiguity.
The varying entity notations, which also diverge from those used in ASN [1], generate unnecessary confusion, despite possibly facilitating the writing process. The clarity is further compromised when it is uncertain whether terms like o_i or o^t_i refer to the raw observation or the entity (see Line 73, Line 86, and Figure 1). - The authors present the detailed definition of the subtasks after these terms have already been utilized in the text. To improve the flow and comprehension, it could be beneficial to relocate the definitions found on Line 143 to a position before Line 96, where they are first employed. - The paper also contains lengthy sentences riddled with grammatical errors (e.g., Line 138 and Line 141), detracting from the readability. - The description of the adaptive subtask semantics alongside the attention mechanism, specifically from Line 156 to Line 163, is notably vague and could use clearer explanations. **W2. Lack of comparisons to previous work** - The absence of a dedicated section for related work hinders the readers' ability to contextualize this study within the broader scope of the relevant literature. It is strongly suggested that the authors include a comprehensive summary of related work. This should highlight the strengths, weaknesses, differences, and improvements offered by the proposed method in comparison to other techniques in the field. Without this context, DT2GS may risk being perceived as merely a combination of ASN (entity) [1] and UPDeT (attention) [2]. - The authors appear to have overlooked the need to reference or empirically compare their work to other related studies that also focus on subtasks in Multi-Agent Reinforcement Learning (MARL). Notably missing comparisons to RoDE [3] and LDSA [4] could provide valuable benchmarks. [1] Wang et al., “Action Semantics Network: Considering the Effects of Actions in Multiagent Systems”, ICLR 2020. 
[2] Hu et al., “UPDeT: Universal Multi-agent Reinforcement Learning via Policy Decoupling with Transformers,” ICLR 2021. [3] Wang et al., “RODE: Learning Roles to Decompose Multi-Agent Tasks,” ICLR 2021. [4] Yang et al., “LDSA: Learning Dynamic Subtask Assignment in Cooperative Multi-Agent Reinforcement Learning,” NeurIPS 2022. **Acknowledgment Following Rebuttal** The author's rebuttal offered comprehensive revisions to the notations for enhanced clarity. Additionally, the inclusion of pseudocode aids in understanding the core concepts. I believe that implementing these proposed changes will significantly strengthen the work. Technical Quality: 3 good Clarity: 1 poor Questions for Authors: **Q1.** The paper seems to lack a rigorous mathematical formulation that captures the definition of a subtask provided on Line 142. Could the authors clarify this aspect with a suitable formulation? **Q2.** The two-stage self-attention mechanisms utilized in the subtask semantics module do not appear to be immediately intuitive. Could the authors further clarify their ideas from Line 146 to Line 163, particularly focusing on how this structure can prompt the subtask to categorize the impact of the agent on the entities? **Q3.** It's somewhat perplexing why the MLP (Multi-Layer Perceptron) is designated as "similarity", considering it doesn't directly compute the similarities of the embeddings. Could the authors clarify this terminology? **Q4.** It’s unclear how the learned subtask context is actually used by the policy. A pseudocode of the entire process including MAPPO update would be helpful. **Q5.** The depiction of multiple encoders with a single decoder in Figure 1 is rather confusing. Given that it's a decentralized system, it seems unlikely that each agent shares all input embeddings from all encoders. Does the figure represent the encoder and decoder for the n-th agent, where only the n-th embedding is input into the decoder, excluding the other n-1 agents? 
**Q6.** Is the robustness of DT2GS dependent on the choice of n_k=4? Would the method's performance vary significantly with a different choice of n_k? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 1 poor Contribution: 3 good Limitations: Despite the authors' affirmation in the checklist regarding limitations, neither the main manuscript nor the appendix seems to provide any explicit discussion of the potential limitations of the proposed method. A possible limitation of the proposed work could be its inability to generalize to target tasks with a larger distribution shift, especially those that necessitate the implementation of entirely novel subtasks. For instance, an agent that learns four subtasks during training may struggle when the target task requires the execution of a fifth, previously unseen subtask. Moreover, the computational and memory demands imposed by the subtask encoder and semantics module could also present significant challenges, serving as additional potential limitations of this work Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks a lot for your advice on further improving this paper. We would like to discuss your points one by one. Any further discussion will be appreciated. 1. > **Weakness 1**: Lack of clarity We revised our writing. Specifically: (1) We standardized the use of subscripts in our revised paper as follows: - $n$ -- the number of agents. - $m$ -- the number of entities. - $n_{ally}$ -- the number of allies. - $n_{enemy}$ -- the number of enemies. In general, the following equations hold: $n=n_{ally}+1, m=n+n_{enemy}$. We also standardized the use of the following terms to make our paper clearer: - $o_i$ -- the raw observation of agent $i$ - $o_i^t$ -- the raw observation of agent $i$ at timestep $t$ (that is, the superscript $t$ denotes the timestep) - $o_{i,j}$ -- the entity-observation of agent $i$ on entity $j$ - $o_{i,j}^{t}$ -- the entity-observation of agent $i$ on entity $j$ at timestep $t$ (2) We added the definition of subtask semantics, which refers to the effects of an agent on entities when performing a given subtask, to the first paragraph of Section 3 (around Line 100). (3) We fixed the grammatical errors and broke lengthy sentences into shorter ones that are easier to understand. 2. > **Weakness 2**: Lack of comparisons to previous work (1) We added a related work section to the appendix of the revised paper. For details, please see "**Weakness 1**" of our global response. Besides, it should be emphasized that DT2GS is not a combination of ASN and UPDeT. The innovations of DT2GS include a scalable subtask encoder and an adaptive subtask semantics module, neither of which was proposed by previous work. (2) We added RODE, ROMA, and HSD to our references, and added RODE, ROMA, HSD, and LDSA as baselines in our experiments. The results can be seen in "**Enhancement 1**" of our global response. We see that DT2GS outperforms all baselines in terms of asymptotic performance. 3.
> **Question 1**: A formula for subtask The mathematical formulation of the subtask is: $k_i^t=\text{Gumbel-Softmax}(h_i^t)$ where $h_i^t$, from Formula (4), is the history representation of agent $i$. We added this formulation around Line 132 of our paper to make it clearer. 4. > **Weakness 1 (4) && Question 2**: Clarification of subtask semantics We made the following improvements to make the Adaptive Subtask Semantics part clearer in our revised paper. First, we define the embedding of the one-hot subtask $k_i^t$ as: $z_i^t=\text{Embedding}(k_i^t)$ Then, we modified Formula (5) as follows: $\phi_i^t=\text{softmax}(\frac{\hat{Q}_i^tK_i^t}{\sqrt{d_K}})V_i^t, \quad \hat{Q}_i^t=W_Qz_i^t$ where $K_i^t = W_{K}[o_{i, 1}^t, o_{i, 2}^t, ..., o_{i, m}^t], \quad V_i^t = W_{V}[o_{i, 1}^t, o_{i, 2}^t, ..., o_{i, m}^t]$ Therefore, we model the adaptive subtask semantics as a weighted sum of all entity-observations to differentiate the impact of the agent on entities. At the same time, we kept the original action-semantics formula unchanged: $\varphi_i^t=\text{softmax}(\frac{{Q}_i^tK_i^t}{\sqrt{d_K}})V_i^t, \quad {Q}_i^t=W_Qo_i^t$ Therefore, the calculations of the adaptive subtask semantics $\phi_i^t$ and the action semantics $\varphi_i^t$ are differentiated by their query matrices $\hat{Q}_i^t$ and $Q_i^t$. Besides, we modified $Q(a_j|o_i^t)$ in Formula (7) to $Q_{value}(a_j|o_i^t)$ to differentiate the action-value function ($Q_{value}$) from the query matrices ($Q_i^t$ and $\hat{Q}_i^t$) of the attention mechanisms. In addition, we added $Q_{value}$ and $Pr$ to Figure 3 to help readers match the formulas to the figure. 5. > **Question 3**: Using MLP for similarity. In fact, MLPs are used in diverse ways; an MLP is also a kind of embedding, which can capture similarity to a certain extent. For example, there is a similar usage in [1]. Our experimental results also support the effectiveness of using an MLP for similarity.
> [1] Zeng, Kuo-Hao, et al. "Moving Forward by Moving Backward: Embedding Action Impact over Action Semantics", ICLR 2022. 6. > **Question 4**: How the subtask is used by the policy.

We believe that the combination of Figure 1 and Figure 3 serves as pseudocode. But to provide readers with a clearer understanding, we also included pseudocode in the revised paper's appendix. 7. > **Question 5**: Multiple encoders with a single decoder in Figure 1.

Actually, our encoder and decoder can be either shared or independent among all agents. Sorry for the confusion caused. We modified Figure 1 in our revised paper to avoid confusing readers in this way. 8. > **Question 6**: The choice of $n_k=4$

$n_k$ is the number of task-independent subtasks across a series of tasks. As shown in the ablation experiment in the appendix, increasing $n_k$ appropriately can enhance the diversity of subtasks and thus capture as many task-independent subtasks as possible. However, an excessive value of $n_k$ can also hurt the efficiency of our method, resulting in poorer performance. Therefore, it is natural that a reasonable choice of $n_k$ matters to DT2GS. 9. > **Limitations**: Larger distribution shift between source and target tasks.

DT2GS can capture all of the task-independent subtasks, which are generalizable across different tasks. Therefore, the generalization scenarios DT2GS targets are those where the distribution shift between the source and target tasks is moderate. We added this limitation to our revised paper. Furthermore, expanding the applicability of generalization scenarios and reducing computational complexity are the focal points of future work. --- Rebuttal Comment 1.1: Title: Response to the Rebuttal Comment: I appreciate the authors' detailed responses. I believe that the manuscript's clarity will greatly benefit from the upcoming clarifications. With respect to Question 4, some elements of the algorithm's execution remain unclear. 
Specifically, Figures 1 and 3 are individually complex, making it challenging for me to translate their combination into pseudocode. I would be grateful if the authors could provide detailed pseudocode with revised notations, which would significantly aid comprehension of the algorithm. --- Reply to Comment 1.1.1: Title: Thank you for your reply! Comment: The overall pseudocode of DT2GS based on MAPPO is as follows:

**Algorithm 1: DT2GS based on MAPPO**

**Input:** the parameters $\theta_{En}^{\pi}$ for the Scalable Subtask Encoder of actor $\pi$; the parameters $\theta_{De}^{\pi}$ for the Adaptive Action Decoder of actor $\pi$; the parameters $\theta^{V}$ for critic $V$; the number of subtasks $n_k$

**Output:** the Scalable Subtask Encoder $\pi_{En}$ and the Adaptive Action Decoder $\pi_{De}$ for actor $\pi$; the critic $V$

1: Initialize $\theta_{En}^{\pi}, \theta_{De}^{\pi}, \theta^{V}, n_k$
2: Initialize the total number of environment-interaction timesteps $step_{\text{max}}$; the number of timesteps $T$ in an episode; the number of episodes $batch\\_size$ for each actor/critic update
3: Initialize $step \leftarrow 0$
4: **while** $step \leq step_{\text{max}}$ **do**
5: &emsp;set data buffer $D = \{\}$
6: &emsp;**for** $idx=1$ **to** $batch\\_size$ **do**
7: &emsp;&emsp;$\tau=[]$ (empty list)
8: &emsp;&emsp;initialize actor RNN states $h_{1, \pi}^{0}, ..., h_{n, \pi}^{0}$ for each agent
9: &emsp;&emsp;initialize critic RNN states $h_{1, V}^{0}, ..., h_{n, V}^{0}$ for each agent
10: &emsp;&emsp;initialize subtasks $k_1^0, ..., k_n^0$ for each agent // *initialize subtasks with the $n_k$-dim zero vector*
11: &emsp;&emsp;**for** timestep $t=1$ **to** $T$ **do**
12: &emsp;&emsp;&emsp;**for all** agents $i$ **do**
13: &emsp;&emsp;&emsp;&emsp;$k_i^t, h_{i, \pi}^t = \pi_{En}(o_i^t, k_i^{t-1}, h_{i, \pi}^{t-1};\theta_{En}^{\pi})$ // *call the Scalable Subtask Encoder module*
14: &emsp;&emsp;&emsp;&emsp;$a_i^t = \pi_{De}(o_i^t, k_i^t; \theta_{De}^{\pi})$ // *call the Adaptive Action Decoder module*
15: &emsp;&emsp;&emsp;&emsp;$v_i^t, h_{i, V}^t = V(s_i^t, h_{i, V}^{t-1}; \theta^V)$
16: &emsp;&emsp;&emsp;**end for**
17: &emsp;&emsp;&emsp;Execute actions $\pmb{a^t}$; observe $r^t, s^{t+1}, \pmb{o^{t+1}}$
18: &emsp;&emsp;&emsp;$\tau += [s^t, \pmb{o^t}, \pmb{h_{\pi}^t}, \pmb{h_V^t}, \pmb{k^t}, \pmb{a^t}, r^t, s^{t+1}, \pmb{o^{t+1}}]$
19: &emsp;&emsp;**end for**
20: &emsp;&emsp;$step += T$
21: &emsp;&emsp;Compute advantage estimates $\hat{A}$ via GAE on $\tau$
22: &emsp;&emsp;Compute rewards-to-go $\hat{R}$ on $\tau$
23: &emsp;&emsp;Split trajectory $\tau$ into chunks of length $L$
24: &emsp;&emsp;**for** $l=0,L,2L,...,(T//L)*L$ **do**
25: &emsp;&emsp;&emsp;$D = D \cup (\tau[l:l+L], \hat{A}[l:l+L], \hat{R}[l:l+L])$
26: &emsp;&emsp;**end for**
27: &emsp;**end for**
28: &emsp;**for** mini-batch $k=1, ..., K$ **do**
29: &emsp;&emsp;$b \leftarrow$ random mini-batch from $D$ with all agent data
30: &emsp;&emsp;**for** each data chunk $c$ in mini-batch $b$ **do**
31: &emsp;&emsp;&emsp;update RNN hidden states for $\pi_{En}$ and $V$ from the first hidden state in the data chunk
32: &emsp;&emsp;**end for**
33: &emsp;**end for**
34: &emsp;Adam update of $\theta_{En}^{\pi}, \theta_{De}^{\pi}$ on the actor loss with data $b$
35: &emsp;Adam update of $\theta^V$ on the critic loss with data $b$
36: **end while**
37: **Return** $\pi_{En}, \pi_{De}, V$

--- Reply to Comment 1.1.2: Title: Sincerely invite for further discussion. Comment: Dear reviewer, as the discussion stage draws to a close, we would greatly appreciate your opinions. Are there any additional concerns or suggestions that you would like to share? We are more than willing to engage in a constructive discussion with you.
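As a small, self-contained illustration of the subtask-selection step $k_i^t=\text{Gumbel-Softmax}(h_i^t)$ used in line 13 of Algorithm 1, here is a minimal numpy sketch. The dimensions are toy values, and $h_i^t$ is used directly as the logits; in the actual model a learned head would map the history representation to $n_k$ logits, and a straight-through variant would typically be used for discrete selection.

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_softmax(logits, tau=1.0):
    """Draw a relaxed (near-one-hot) sample from a categorical distribution."""
    u = rng.uniform(size=logits.shape)
    g = -np.log(-np.log(u))          # Gumbel(0, 1) noise
    y = (logits + g) / tau           # lower tau -> closer to one-hot
    e = np.exp(y - y.max())
    return e / e.sum()

n_k = 4                              # number of subtasks, as in the paper
h = rng.standard_normal(n_k)         # toy stand-in for the history logits
k = gumbel_softmax(h, tau=0.5)       # soft subtask assignment k_i^t

assert k.shape == (n_k,) and np.isclose(k.sum(), 1.0)
```

Lowering `tau` pushes the sample toward a one-hot subtask choice while keeping the operation differentiable with respect to the logits.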
Summary: The paper introduces the DT2GS framework to improve the generality of agents in Multi-Agent Reinforcement Learning (MARL) by decomposing a task into generalizable subtasks. The authors use a scalable subtask encoder to identify appropriate subtasks based on historical entity-observation pairs, instead of action-observation pairs, which enhances generalizability. With the subtask selected by the scalable subtask encoder, the paper employs an adaptive subtask semantics module with a self-attention mechanism to compute adaptive action semantics, which are then passed to an MLP to obtain the action distribution and Q values. These adaptive action semantics contain information about the currently selected subtask as well as observations from all the entities, making them sufficient for computing actions and Q values. The self-attention mechanism used does not have a position embedding, which suits the requirements of scalability across tasks regardless of the number of entities and of permutation invariance. Empirical results demonstrate that DT2GS possesses sound zero-shot generalization capability across tasks, exhibits sufficient transferability, and outperforms existing methods in both multi-task and single-task problems. Strengths: 1. The proposed framework's structure is well suited to the problem setting, including the scalable subtask encoder and the adaptive subtask semantics. 2. Experimental results are promising, demonstrating improved performance compared to baseline methods, as well as providing insightful analysis on subtask percentage. Weaknesses: 1. A more detailed introduction of related work could be helpful for readers if space allows, such as the network design or methodology of ASN. 2. In Figure 1, the right part uses "Env Cognition" and "Cognition Encoder: MLP". Adding the word "cognition" to appropriate locations in the text around Equation 2 would make Figure 1 match the text better. 
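The permutation-invariance point above is easy to verify numerically: with a single query, no position embedding, and attention pooling over entity-observations, shuffling the entities leaves the output unchanged. Below is a minimal numpy sketch with made-up dimensions and random weights ($W_Q$, $W_K$, $W_V$ here are illustrative stand-ins, not the paper's trained parameters):

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, d_k = 6, 5, 4                       # entity-obs dim, #entities, key dim
W_q, W_k, W_v = (rng.standard_normal((d_k, d)) for _ in range(3))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_pool(query_input, entity_obs):
    """phi = softmax(q K^T / sqrt(d_k)) V, pooled over all entity-observations."""
    q = W_q @ query_input                 # (d_k,)
    K = entity_obs @ W_k.T                # (m, d_k)
    V = entity_obs @ W_v.T                # (m, d_k)
    attn = softmax(K @ q / np.sqrt(d_k))  # (m,) attention weights
    return attn @ V                       # (d_k,) pooled semantics

obs = rng.standard_normal((m, d))         # one agent's entity-observations
z = rng.standard_normal(d)                # stand-in for the subtask embedding

out = attention_pool(z, obs)
out_shuffled = attention_pool(z, obs[rng.permutation(m)])
assert np.allclose(out, out_shuffled)     # entity order does not matter
```

Because the keys and values permute together with the entities and the weighted sum is order-free, the pooled semantics are invariant to entity ordering and the same weights work for any number of entities.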
Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Figure 4 only reports the performance of each algorithm trained on the source task but tested on the target task. What is the performance of algorithms on the source task? If one algorithm performs poorly even on the source task, then it makes little sense to test it on a different target task. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The authors did not explicitly discuss the limitations of their work in the paper. However, no major limitations or potential negative societal impacts have been identified in this review. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your detailed review. We are glad to discuss your concerns one by one. Any further discussion will be appreciated. 1. > **Weakness 1**: A more detailed introduction of related work could be helpful for readers if space allows, such as the network design or methodology of ASN.

We added a section on related work to the appendix of the revised paper. For details, please see **Weakness 1** of our global response. 2. > **Weakness 2**: In Figure 1, the right part uses "Env Cognition" and "Cognition Encoder: MLP". Adding the word "cognition" to appropriate locations in the text around Equation 2 would make Figure 1 match the text better.

We added "cognition" in our revised paper to ensure consistency between the text and the figure. Specifically, we modified the content of Line 125 to "and the observation embedding $e_i^t$, which is also referred to as the Env Cognition of agent $i$ in Figure 1, is constructed as: ". 3. > **Question**: Figure 4 only reports the performance of each algorithm trained on the source task but tested on the target task. What is the performance of algorithms on the source task? If one algorithm performs poorly even on the source task, then it makes little sense to test it on a different target task.

The asymptotic performance of DT2GS, UPDeT, and ASN_G is shown in the table below (values before '/' are the algorithms' test winning rates on the source task, and values after '/' are the zero-shot performance of the models on the target tasks). As we can see, although DT2GS has the same or similar asymptotic performance as UPDeT/ASN_G on all/most source tasks, its zero-shot generalization performance is significantly higher than that of UPDeT/ASN_G, with an average test winning rate about 22%/34% higher over all 8 zero-shot generalization scenarios. 
| source / target | DT2GS | UPDeT | ASN_G |
| --- | --- | --- | --- |
| 3s_vs_4z / 3s_vs_5z | 1 / 0.391 | 1 / 0.203 | 0.109 / 0 |
| 2s3z / 3s5z | 1 / 0.336 | 1 / 0.062 | 0.547 / 0.208 |
| 3s5z / 3s5z_vs_3s6z | 1 / 0.875 | 1 / 0.648 | 0.969 / 0.547 |
| 8m / 8m_vs_9m | 1 / 0.734 | 1 / 0.391 | 0.938 / 0.141 |
| 8m / 10m_vs_11m | 1 / 0.609 | 1 / 0.539 | 0.938 / 0.188 |
| 8m / 25m | 1 / 0.367 | 1 / 0.172 | 0.938 / 0.039 |
| 8m_vs_9m / 25m | 1 / 0.484 | 1 / 0.234 | 0.469 / 0.164 |
| 8m_vs_9m / 5m_vs_6m | 1 / 0.180 | 1 / 0.070 | 0.469 / 0 |

--- Rebuttal Comment 1.1: Title: Thanks for your reply. Comment: Thank you for submitting your rebuttal. The authors have made a commendable effort to address and answer my concerns, which is greatly appreciated.
Rebuttal 1: Rebuttal: 1. > **Weakness 1**: Lack of a section for related work.

Due to space limitations, we only provided a brief introduction to the related work in the Introduction section. Based on the feedback received, we added a detailed section on related work to the appendix of the revised paper. The related work is organized according to the following structure: First, we classify the methods for transfer learning across tasks in online MARL into two categories: network-design-based methods [1, 2, 3] and task-embedding-based methods [4, 5, 6]. Second, we summarize the methods for transfer learning across tasks belonging to offline MARL [7] and to curriculum learning [8, 9] in MARL. Third, we summarize the related work in MARL focusing on concepts like skills/options/roles [10, 11, 12, 13], which are similar to the subtasks studied in our method. Finally, we explain in detail the network design of ASN [2], the action semantics mentioned in ASN, and the population-invariant network (PIN) structure developed in UPDeT, which can help readers understand the pipeline of our method. Previous works on multi-agent transfer learning are based on network design or task embedding, but their lack of efficient knowledge reuse leads to limited transferability. Our work leverages knowledge reuse by ensuring semantic consistency between diverse tasks and prevents over-fitting to source tasks, which greatly improves transferability and generalization capability. Results show that the DT2GS model achieves zero-shot generalization capability and more robust transferability, achieving an average transfer speedup of 100×.

> [1] Hu et al., “UPDeT: Universal Multi-agent Reinforcement Learning via Policy Decoupling with Transformers,” ICLR 2021.
>
> [2] Wang et al., “Action Semantics Network: Considering the Effects of Actions in Multiagent Systems”, ICLR 2020. 
> [3] Agarwal et al., "Learning transferable cooperative behavior in multi-agent teams", ArXiv 2019.
>
> [4] Liu et al., "Value function transfer for deep multi-agent reinforcement learning based on n-step returns", IJCAI 2019.
>
> [5] Qin et al., "Multi-agent policy transfer via task relationship modeling", ArXiv 2022.
>
> [6] Schäfer et al., "Learning task embeddings for teamwork adaptation in multi-agent reinforcement learning", ArXiv 2022.
>
> [7] Zhang et al., "Discovering generalizable multi-agent coordination skills from multi-task offline data", ICLR 2022.
>
> [8] Long et al., "Evolutionary population curriculum for scaling multi-agent reinforcement learning", ICLR 2019.
>
> [9] Wang et al., "From few to more: Large-scale dynamic multiagent curriculum learning", AAAI 2020.
>
> [10] Yang et al., “LDSA: Learning Dynamic Subtask Assignment in Cooperative Multi-Agent Reinforcement Learning,” NeurIPS 2022.
>
> [11] Wang et al., “RODE: Learning Roles to Decompose Multi-Agent Tasks,” ICLR 2021.
>
> [12] Wang et al., "ROMA: Multi-Agent Reinforcement Learning with Emergent Roles", ICML 2020.
>
> [13] Yang et al., "Hierarchical cooperative multi-agent reinforcement learning with skill discovery", AAMAS 2020.

2. > **Enhancement 1**: Added experimental comparisons with some related work.

We added RODE, ROMA, and HSD to our references, and added RODE, ROMA, HSD, and LDSA as baselines in our experiments. The new results are shown in the table below. 
| task | DT2GS | UPDeT | MAPPO | ASN | LDSA | RODE | ROMA | HSD |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 3s5z_vs_3s6z | 1.00 (0.02) | 1.00 (0.17) | 0.00 (0.23) | 0.84 (0.11) | 0.00 (0.00) | 0.00 (0.00) | 0.01 (0.03) | 0.00 (0.00) |
| 5m_vs_6m | 0.94 (0.07) | 0.50 (0.30) | 0.84 (0.06) | 0.06 (0.04) | 0.68 (0.11) | 0.48 (0.22) | 0.67 (0.12) | 0.00 (0.00) |
| 6h_vs_8z | 1.00 (0.03) | 0.91 (0.08) | 0.47 (0.18) | 0.00 (0.01) | 0.01 (0.02) | 0.03 (0.03) | 0.15 (0.17) | 0.00 (0.00) |
| corridor | 0.95 (0.05) | 0.91 (0.07) | 1.00 (0.02) | 0.09 (0.05) | 0.80 (0.13) | 0.00 (0.00) | 0.01 (0.04) | 0.02 (0.05) |

The values in the table represent the mean and variance of the test winning rate. As we can see, DT2GS outperforms all baselines. Besides, a more intuitive graph of this result can be found in the PDF file we submitted. 3. > **Enhancement 2**: Added experiments on other environments.

We conducted zero-shot experiments on the physical deception task (Spread) in the multi-agent particle world environments (MPE) [1]. The experimental results are as follows.

| source/target | DT2GS | UPDeT | ASN_G |
| --- | --- | --- | --- |
| 3/2 | -84.44 (5.85) | -83.61 (2.31) | -97.43 (8.67) |
| 3/3 | -170.88 (23.18) | -206.39 (9.53) | -212.66 (7.11) |
| 3/4 | -265.79 (52.21) | -293.92 (14.58) | -309.32 (22.61) |
| 3/5 | -371.11 (63.95) | -370.28 (31.05) | -459.51 (19.16) |
| 3/6 | -472.74 (99.25) | -496.00 (72.95) | -507.66 (21.75) |

For example, "3/4" in the "source/target" column indicates that the source task has 3 agents and 3 landmarks, while the target task has 4 agents and 4 landmarks. The values in each algorithm's column represent the mean and variance of the episode rewards. 
As we can see, DT2GS outperforms UPDeT and ASN\_G in terms of zero-shot generalization capability, achieving average episode rewards about 17.05 and 44.32 higher, respectively. A more intuitive bar chart for this experiment can be found in the PDF file we submitted. > [1] Lowe, Ryan, et al., "Multi-agent actor-critic for mixed cooperative-competitive environments", NeurIPS 2017. Pdf: /pdf/defe64526ec4404e4b011e5885b5a55b20558e96.pdf
NeurIPS_2023_submissions_huggingface
2023
Improving Robustness with Adaptive Weight Decay
Accept (poster)
Summary: To improve the robustness, this paper proposes a method to determine the weight decay hyper-parameter during adversarial training adaptively. The key idea is to select the proper weight decay parameter $(\lambda_t )$ to keep the decay over the gradient (DoG) constant. The proposed method is evaluated on image classification and label-noise tasks, compared with adversarial training using a fixed weight decay hyper-parameter $(\lambda_{wd})$. According to the experimental results, adversarial training with dynamic $\lambda_t$ performs better. Strengths: 1) The comparison results visualized by figures (Figures 2 & 3) are concise and clear. Weaknesses: 1) The writing is not good, and the research gap and motivation are vague. 2) How this method will benefit pruning, as mentioned in the abstract, is not well supported. 3) The essential part of determining the constant value of DoG is not clearly stated. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1) The way to organize the paper does not look good to me. The introduction part (lines 16 - 37) is too short, and the related work section (lines 149 - 155) appears after the proposed method. The relations between the proposed work and those existing works are blurred. I cannot see a clear clue about how the idea came up or what bottlenecks this proposed method aims to solve. 2) About the grid search strategy for $\lambda_{wd}$ and the selection for $DoG$. Taking Figure 3 as an example, I am curious about how to search for those two hyper-parameters. What is the search interval, and what is the search step? Will the search stage be very costly? Can there be a more efficient guide to deciding the constant value of DoG? 3) About the generality of this method. The proposed method can only be utilized with cross-entropy plus $l_2$-norm regularizer? If yes, I am concerned about the generality of this proposed method. 4) About the benefits that this method can bring to network pruning. 
This point is mentioned in the abstract (line 14) and Section 3 (line 219). However, this paper does not provide any mathematical analysis or experimental results. More important, the listed references (lines 218 - 219) are not state-of-the-art. Thus, I think the claim is a bit exaggerated. 5) About the editorial issues that make the reading difficult. The proposed method is called Adaptive Weight Decay / AWD for a while and DoG for a while. I suggest using a fixed abbreviation for the proposed method. Besides, many descriptions are inconsistent, though not big issues from the grammar aspect, they harm the reading experience significantly: - when mentioning a section, lines 31 & 35, lines 146 & 212, and lines 190 & 195. There are at least three different ways to mention the section. - when mentioning a figure, line 88 and line 99 are inconsistent. - when defining a variable, the definitions of $w$ are different in line 45 and line 68, parameter or parameters? - when mentioning the algorithm, lines 127 and line 133 are different. - … Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
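The reviewed idea can be condensed into a short sketch. Assuming DoG denotes the ratio of the weight-decay gradient norm $\lambda_t\lVert w_t\rVert$ to the main-loss gradient norm $\lVert\nabla_w L\rVert$ (this reading follows the summary above; variable names and the target value are illustrative), holding DoG constant amounts to:

```python
import numpy as np

DOG_TARGET = 0.022   # a DoG value mentioned in the rebuttal discussion below

def adaptive_lambda(grad, w, dog=DOG_TARGET):
    """Choose lambda_t so that ||lambda_t * w|| / ||grad|| equals dog."""
    return dog * np.linalg.norm(grad) / np.linalg.norm(w)

rng = np.random.default_rng(0)
w = rng.standard_normal(1000)        # current weights
grad = rng.standard_normal(1000)     # gradient of the main (cross-entropy) loss

lam = adaptive_lambda(grad, w)
dog_actual = lam * np.linalg.norm(w) / np.linalg.norm(grad)
assert np.isclose(dog_actual, DOG_TARGET)   # DoG held constant by construction
```

In this reading, $\lambda_t$ shrinks when the main-loss gradients are small relative to the weights and grows when they are large, which is the adaptivity the paper trades against a fixed $\lambda_{wd}$.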
Rebuttal 1: Rebuttal: We thank reviewer 3gZW for their insightful comments and great editorial suggestions. We will incorporate these suggestions in the final version of the paper. Please review our responses to some of the questions you asked. > How this method will benefit pruning, as mentioned in the abstract, is not well supported.

We thank you for this comment. Please note that we are not claiming state-of-the-art or even on-par performance for pruning. However, to incorporate your feedback, we have changed the language used in our manuscript regarding pruning to reflect our claims more accurately. We claim there is more potential for pruning when the network is trained with adaptive weight decay than with the non-adaptive method. The intuition for this comes from our observations in the pruning experiments. Unfortunately, we moved the pruning experiments to the appendix in favor of preserving space for more critical/relevant experiments in the main body of the paper. But here is a brief overview of the experiments: We train a WRN28-10 with both the adaptive and non-adaptive methods on the CIFAR-100 dataset with various values of the learning rate. The results show that the adaptively trained network outperforms the non-adaptively trained one, regardless of the learning rate used to train each network. Then we prune 70% of the variables of each of these networks with a non-iterative L1-norm-based method and evaluate the accuracy of the networks after pruning. The adaptively trained networks still outperform the non-adaptive networks. The results are summarized in Figure 9 in the appendix. > More important, the listed references (lines 218 - 219) are not state-of-the-art. Thus, I think the claim is a bit exaggerated.

We want to clarify that the two references mentioned in lines 218-219 are the earliest works that proposed the weight decay method. 
We cite them as the original papers that proposed the method, and we do not use them as a baseline for comparison. > About the generality of this method. The proposed method can only be utilized with cross-entropy plus l2-norm regularizer? If yes, I am concerned about the generality of this proposed method.

In the paper, we focus on this very common setting. However, we have noticed that some colleagues, inspired by the robustness benefits of this adaptive formulation and its use for balancing different terms in the objective, have utilized it in other settings such as localization. > About the grid search strategy for λwd and the selection for DoG. Taking Figure 3 as an example, I am curious about how to search for those two hyper-parameters. What is the search interval, and what is the search step? Will the search stage be very costly? Can there be a more efficient guide to deciding the constant value of DoG?

We are glad you asked this question. In our earliest experiments, we performed a joint grid search over the learning rate and weight decay hyper-parameters to study the common properties of well-performing sets of hyper-parameters. We monitored different statistics such as the norm of the weights, the norm of the gradients, etc. We plotted the ratio between the norm of the gradients coming from weight decay and the norm of the gradients coming from the main loss. We observed that experiments that performed better had similar values for this metric, which we later called DoG. We then conducted experiments enforcing the DoG value to be constant during the entire training, a procedure we later named Adaptive Weight Decay. 
To answer your question: in general, tuning the hyper-parameter of AWD will yield better results; however, if, due to limited resources, it is infeasible to conduct a grid search, or if the question is how to decide the initial range for the grid search, then, assuming we know a good set of hyper-parameters for the non-adaptive method, we can estimate a well-performing hyper-parameter for the adaptive method. We suggest training the network with the non-adaptive (WD) method and monitoring the average DoG during training until the optimization converges. In our experience, the average DoG will be very close to, and a fair approximation of, the best-performing DoG hyper-parameter found by grid search. To prove this point, we leveraged this method to estimate a good hyper-parameter for the adaptive method and reproduced results similar to those of Rebuffi et al. [1]. We trained a WRN-28-10 with Swish activation, with a batch size of 1024, for 800 epochs, with the extra data, similar to the experiments in Table 2 of Rebuffi et al. Since sweeping the hyper-parameters would require considerable resources, we used the hyper-parameters described in Rebuffi et al. as our hyper-parameters for WD. To estimate a good hyper-parameter for AWD, we monitored the average(DoG) value during the training of the non-adaptive method. Table G1 summarizes the comparison between WD and AWD with extra data.

| Name | Lambda | Natural Acc | AutoAttack |
| :-: | :-: | :-: | :-: |
| Rebuffi et al. | WD=0.0005 | 89.42 | 63.05 |
| Rebuffi + AWD | AWD=0.18 | **90.53** | **63.55** |

Table G1: Performance of AWD with additional data.

For all experiments throughout the paper and the rebuttal, we used the average(DoG) of the WD experiment (i.e., 0.02) to get an estimate of the optimal DoG. Then we used a geometric progression with 16 steps between 2x smaller (i.e., 0.01) and 2x larger (i.e., 0.04) values of the estimated DoG for our grid search. 
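The grid-search recipe just described (estimate DoG from a WD run, then sweep a 16-step geometric progression between half and double the estimate) can be written directly with numpy:

```python
import numpy as np

dog_estimate = 0.02   # average DoG monitored during the non-adaptive (WD) run

# 16 geometrically spaced candidates between 2x smaller and 2x larger.
grid = np.geomspace(dog_estimate / 2, dog_estimate * 2, num=16)

assert len(grid) == 16
assert np.isclose(grid[0], 0.01) and np.isclose(grid[-1], 0.04)
```

A geometric (rather than linear) spacing matches the multiplicative nature of the hyper-parameter: each step scales the candidate by a constant factor of $4^{1/15}$.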
> About the editorial issues that make the reading difficult. The proposed method is called Adaptive Weight Decay / AWD for a while and DoG for a while.

Thanks for this suggestion. We strongly believe that your suggestion can improve the readability of our manuscript, and we will update the next version of the manuscript to address this issue. [1] Rebuffi, S.A., et al. "Fixing data augmentation to improve adversarial robustness". arXiv:2103.01946 --- Rebuttal 2: Title: Response to the Authors Comment: Thanks for the detailed explanations of my questions about: 1) the relations between the proposed method and pruning and, 2) the hyper-parameter selection strategy. Based on that, I would like to raise my score to 6.
Summary: This paper proposes a simple but efficient way to improve model robustness: Adaptive Weight Decay, which automatically tunes the hyper-parameter for weight decay during each training iteration. Experimental results prove that this method significantly improves the robustness of the model on multiple datasets. Strengths: 1. The paper is well-written and the proposed methods are clearly formulated. 2. Experimental results prove that this method significantly improves the robustness of the model on multiple datasets. Weaknesses: 1. It would be better if more theoretical explanations about AWD could be provided. 2. Although the proposed method has a great improvement over the baseline method, it does not compare with other SOTA methods. In particular, on CIFAR10, the AWD only achieves 50.03% robustness under AA attack using the WideResNet-28-10. As far as I know, it is less robust than other methods (e.g., early stop). 3. The hyperparameter (DoG) of AWD has a great influence on the result, which makes the proposed method not so adaptive. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: See Weaknesses Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: See Weaknesses Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you, reviewer pbDj, for your constructive criticism and insightful feedback. Please review our responses to the concerns raised. > It would be better if more theoretical explanations about AWD could be provided.

We absolutely agree with your point. A theoretical analysis would add more value to the paper. In this empirical paper, we tried our best to explain and verify the properties of our method by performing a thorough experimental analysis, and we have shown significant benefits of using AWD for robustness despite its simplicity. We agree that a theoretical analysis of this method is an interesting topic for future work. > Although the proposed method has a great improvement over the baseline method, it does not compare with other SOTA methods. In particular, on CIFAR10, the AWD only achieves 50.03% robustness under AA attack using the WideResNet-28-10. As far as I know, it is less robust than other methods (e.g., early stop).

Thank you for your suggestion regarding comparison with other SOTA methods on CIFAR-10. Please note that Table 1 from the original paper was mainly used as a motivation table, and while we used validation-based early stopping for both conventional WD and AWD, we had not explored other methods, nor had we done experiments on the architectures used for reporting CIFAR-10 numbers in many of the SOTA papers. As you requested, we have conducted experiments on WRN 32-10 (the common choice for running SOTA experiments) and have summarized our results in Table P1. This table has the same format as Table 2 in the main body but focuses on comparisons to more advanced methods on the CIFAR-10 dataset. 
| Dataset | Method | WRN | Aug | Epochs | Nat | AutoAttack |
| :-: | :-: | :-: | :-: | :-: | :-: | :-: |
| CIFAR-10 (New) | AT (Madry et al., 2017) | 32-10 | P&C | 100 | 87.80 | 48.46 |
| CIFAR-10 (New) | TRADES (Zhang et al., 2019) | 32-10 | P&C | 100 | 86.36 | 53.40 |
| CIFAR-10 (New) | MART (Wang et al., 2020) | 32-10 | P&C | 100 | 84.76 | 51.40 |
| CIFAR-10 (New) | FAT (Zhang et al., 2020a) | 32-10 | P&C | 100 | **89.70** | 47.48 |
| CIFAR-10 (New) | AWP (Wu et al., 2020) | 32-10 | P&C | 100 | 57.55 | 53.08 |
| CIFAR-10 (New) | GAIRAT (Zhang et al., 2020b) | 32-10 | P&C | 100 | 86.30 | 40.30 |
| CIFAR-10 (New) | MAIL-AT (Liu et al., 2021) | 32-10 | P&C | 100 | 84.83 | 47.10 |
| CIFAR-10 (New) | MAIL-TR (Liu et al., 2021) | 32-10 | P&C | 100 | 84.00 | 53.90 |
| CIFAR-10 (New) | AWD (ours) with DoG=0.022 + ASAM | 32-10 | P&C | 100 | 88.55 | **54.04** |

Table P1: CIFAR-10 robustness comparisons

As can be seen, despite its simplicity, our method outperforms many strong algorithms on the CIFAR-10 dataset both in terms of natural accuracy and robustness. We will also update our manuscript's next version to include these results. > The hyperparameter (DoG) of AWD has a great influence on the result, which makes the proposed method not so adaptive.

We do agree that correctly tuning this hyper-parameter, similar to many other hyper-parameters, can result in more favorable results. However, we note that the robustness/performance of AWD is less sensitive and easier to tune compared to the traditional weight-decay hyper-parameter, lambda (e.g., please see Fig. 11 in the appendix). The comparatively lower sensitivity has enabled us to re-use the DoG hyper-parameter (0.022) found before for various experiments during the rebuttal period with different settings (see Table R1 and Table R2). This hyper-parameter value has worked well and has resulted in performance comparable to the tuned hyper-parameter. Please see, for example, Table P2, in which we use the same setting as Table R2 and report the robust accuracy for both the optimal DoG parameter and the fixed value (0.022) for various numbers of epochs. 
|Dataset|DoG|Epochs.|Nat|AutoAttack|
|:-:|:-:|:-:|:-:|:-:|
|CIFAR-100|Re-use (0.022)|50|62.85|29.25|
|CIFAR-100|Optimize (0.024)|50|61.43|29.40|
|CIFAR-100|Re-use (0.022)|100|64.49|29.70|
|CIFAR-100|Optimize (0.024)|100|62.72|29.82|
|CIFAR-100|Re-use (0.022)|150|64.17|29.94|
|CIFAR-100|Optimize (0.022)|150|64.17|29.94|
|CIFAR-100|Re-use (0.022)|200|64.37|29.55|
|CIFAR-100|Optimize (0.022)|200|64.37|29.55|
|CIFAR-100|Re-use (0.022)|250|63.24|29.68|
|CIFAR-100|Optimize (0.022)|250|63.24|29.68|
|CIFAR-100|Re-use (0.022)|300|63.35|29.28|
|CIFAR-100|Optimize (0.024)|300|61.03|29.56|

Table P2: AWD is not sensitive to its hyper-parameter.

---

Rebuttal Comment 1.1: Title: Reply to the author Comment: Thanks for the authors' response. After reading the rebuttals, I still have some concerns. 1. Please describe the experimental setup in Figure 3 in detail; it is confusing. 2. From Figure 3 of the submitted manuscript, the hyperparameter (DoG) of AWD has a great influence on the result. If we face a new dataset or network, how should we choose DoG?

---

Reply to Comment 1.1.1: Comment:

> 1. Please describe the experimental setup in Figure 3 in detail; it is confusing.

Thank you, dear Reviewer pbDj, for helping us identify that the plotting function did not sort the x-axis in Figure 3. We believe this may have been the source of the confusion. While the results remain intact, we will fix the plotting in the camera-ready/future versions. Since we could not update the figure during the discussion period, we have gathered the information from that figure and compiled it in table format (and included clean/nat accuracy), with the hyper-parameters sorted (Tables 1-4). We would also like to use this opportunity to clarify the experiments in Figure 3. We train WRN28-10 networks once using traditional weight decay (with the lambda hyperparameter) and once using AWD (with the DoG hyperparameter).
In these experiments, we vary the hyper-parameters (doing a grid search within a reasonable range) and use validation-based early stopping to pick the most robust checkpoint on the held-out validation set. We then report the robust accuracy on the test set by attacking the models with AutoAttack (AA). Regarding the range of hyperparameters searched, we search over twice as many hyperparameter values for WD compared to AWD and ensure that the optimal point is not at either end of the search range.

|DoG|C10 - Nat|C10 - AA|C100 - Nat|C100 - AA|
|:-:|:-:|:-:|:-:|:-:|
|0.02|**88.05**|49.01|**62.85**|25.75|
|0.02181|87.08|**50.03**|61.39|**27.15**|
|0.02378|86.78|49.72|59.51|26.61|
|0.02594|85.90|50.00|55.97|26.61|
|0.02828|84.50|49.21|51.12|25.70|
|0.03084|82.25|47.93|47.24|24.63|
|0.03364|77.15|45.53|41.16|22.69|
|0.03668|72.91|44.23|36.30|20.64|
|0.04|67.56|41.35|31.03|18.19|

Table 1: Table from Figure 3 -- AWD for CIFAR-10 and CIFAR-100.

|Lambda|C10 - AA|C10 - Nat|C100 - AA|C100 - Nat|
|:-:|:-:|:-:|:-:|:-:|
|0.00005|42.62|82.66|20.86|54.90|
|0.000067|41.97|83.26|19.37|53.95|
|0.000089|43.43|81.89|20.39|59.91|
|0.000119|43.39|82.27|21.08|59.50|
|0.000158|43.28|82.71|21.51|59.45|
|0.000211|43.38|85.97|21.66|59.37|
|0.000281|43.64|82.11|21.90|60.01|
|0.000375|44.19|86.34|21.75|59.61|
|0.0005|44.03|86.40|**22.53**|60.15|
|0.00067|44.24|**86.59**|21.61|60.54|
|0.00089|**45.19**|84.31|21.55|**61.33**|
|0.00119|43.00|84.29|21.19|57.75|
|0.00158|44.15|85.17|21.29|59.31|
|0.00211|43.39|85.11|22.06|59.99|
|0.00281|43.35|85.34|22.07|58.26|
|0.00375|42.27|81.31|20.49|50.66|
|0.005|37.23|72.96|17.76|41.96|

Table 2: Table from Figure 3 -- WD for CIFAR-10 and CIFAR-100

|Lambda|Tiny - Nat|Tiny - AA|
|:-:|:-:|:-:|
|5E-05|45.08|10.44|
|6.7E-05|45.15|10.72|
|8.9E-05|45.81|10.86|
|0.000119|46.51|10.81|
|0.000158|43.81|10.75|
|0.000211|44.89|11.25|
|0.000281|47.04|11.94|
|0.000375|48.08|11.58|
|0.0005|49.77|11.79|
|0.00067|50.24|11.79|
|0.00089|50.01|12.41|
|0.00119|**52.90**|15.08|
|0.00158|52.05|16.42|
|0.00211|47.87|**16.73**|
|0.00281|38.85|14.25|
|0.00375|24.46|8.41|
|0.005|6.67|2.70|

Table 3: Table from Figure 3 -- WD for TinyImageNet

|DoG|Tiny - Nat|Tiny - AA|
|:-:|:-:|:-:|
|0.01361|**54.35**|17.33|
|0.0147|53.11|18.79|
|0.01587|51.34|19.38|
|0.01714|48.46|**19.74**|
|0.01852|45.23|19.74|
|0.02|41.90|19.33|
|0.02181|36.34|17.68|
|0.02378|30.93|15.74|
|0.02594|25.97|13.89|

Table 4: Table from Figure 3 -- AWD for TinyImageNet

> 2. From Figure 3 of the submitted manuscript, the hyperparameter (DoG) of AWD has a great influence on the result. If we face a new dataset or network, how should we choose DoG?

Thank you for raising this question. Given that a similar question was also raised by reviewer 3gZW, we will include a reference to this discussion in the appendix of the future revision. While, as seen in Figure 3, the hyper-parameter value of 0.021 resulted in improvements over the optimal hyper-parameter for traditional weight decay across all 3 datasets, the results can be improved even further with a proper grid search. The preferred way to find the best hyper-parameter for adaptive weight decay (i.e., DoG) is to treat it like any other hyper-parameter and perform a grid search. But having a good estimate for the range used during the grid search has value. Hence, we present a method for finding a good estimate that works well in practice. We suggest that, if a training pipeline exists that is fully tuned for non-adaptive/traditional weight decay, we can get a reasonable estimate for DoG by taking an average (or moving average) of the DoG values over all iterations during training with traditional weight decay. This estimate works well in practice for training with AWD and comes with almost no additional computational overhead. For most experiments throughout the paper and the rebuttal, we used the average DoG of the WD experiment (e.g., 0.02) as an estimate for the optimal DoG.
Then we used a geometric progression with 16 steps between 2x smaller (e.g. 0.01) and 2x larger (e.g. 0.04) values of the estimated DoG for our grid search.
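The estimate-then-search recipe described above can be sketched in a few lines. `estimate_dog` and `dog_grid` are hypothetical helper names, and whether the 16 grid points include the endpoints is our assumption (the rebuttal does not specify it):

```python
def estimate_dog(dog_history):
    """Average of the per-iteration DoG values logged while training
    with traditional (non-adaptive) weight decay."""
    return sum(dog_history) / len(dog_history)

def dog_grid(dog_estimate, steps=16):
    """Geometric progression of `steps` values between 0.5x and 2x
    of the estimated DoG (endpoint handling is an assumption)."""
    lo, hi = dog_estimate / 2, dog_estimate * 2
    ratio = (hi / lo) ** (1 / (steps - 1))
    return [lo * ratio ** i for i in range(steps)]

# estimate 0.02 -> search values spanning 0.01 to 0.04
grid = dog_grid(0.02)
```

Each grid value is then used as the fixed DoG for one AWD training run, with validation-based early stopping selecting the final model.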
Summary: This paper proposes adaptive weight decay, which balances the gradient of the loss function (such as the cross-entropy loss) and the weight decay term. Although the proposed method is simple, it empirically improves adversarial robustness and classification with label noise. Strengths: The strong point of this paper is the simplicity of the proposed method. The proposed method just adaptively rebalances the gradient of the loss and the weight decay term. It empirically outperforms existing studies on adversarial robustness. Weaknesses: The main concern with this method is that we do not know what the algorithm is ultimately optimizing. We cannot know if the algorithm converges even in optimization problems such as convex optimization. We also cannot know why this method is good for adversarial robustness and label noise. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Q1. I would like to know if this algorithm is designed to minimize the original optimization problem. It is difficult to set up in general, but usually when considering adaptive optimization algorithms like this, it is important for the safety of the algorithm that convergence is guaranteed, at least for optimization of convex functions. Q2. What is inevitable about equation (7)? Of course, you are scaling with the gradient of the cross-entropy loss, but I would like to know the direct effect of why that is effective for adversarial robustness. Q3. Why do robust validation accuracy and loss suddenly improve around 200 epochs? Since it is possible that the performance is good by chance when stopping early at 200 epochs, we would like to see a discussion on convergence. Q4. What happens if we increase the number of epochs up to 300 epochs? Q5. In Table 2, the number of epochs varies for existing methods; if each method is adjusted in terms of early stopping, this may not be a problem, but the conditions should be as consistent as possible. Q6.
Does the proposed method work well in a more natural setting such as ImageNet-A? The adversarial robustness setting is a bit artificial, which is a problem of this field itself. Q7. What is the performance in the normal problem setting instead of adversarial robustness? In particular, do we see a performance decrease with different DoG settings? If so, how do we determine the final DoG value? Even if the user makes the final decision, if the performance in the normal setting is significantly degraded as a result of considering adversarial robustness, we may not want to use it as a method. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: No convergence guarantees for adaptive algorithms. No theoretical basis for performance improvement against adversarial robustness. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank reviewer 61Q2 for their insightful comments. Please review our responses to the questions asked.

> The main concern with this method is that we do not know what the algorithm is ultimately optimizing. We cannot know if the algorithm converges even in optimization problems such as convex optimization. We also cannot know why this method is good for adversarial robustness and label noise.

We have tried our best to explain our intuition in Section 2.2.1. As suggested by the literature, regularizers prevent overfitting, and overfitting hurts the generalization of DNNs on unseen data. We experimentally showed that AWD is a stronger regularizer than the non-adaptive method. We believe less overfitting helps the resulting network perform better on both adversarial robustness and label noise.

> Q1. I would like to know if this algorithm is designed to minimize the original optimization problem. It is difficult to set up in general, but usually when considering adaptive optimization algorithms like this, it is important for the safety of the algorithm that convergence is guaranteed, at least for optimization of convex functions.

As an empirical paper, we verified the effectiveness of this method through various experiments. While our final formulation might seem more complicated, our method can be rewritten as:

$$\min_{w}\ \mathrm{Loss}(\mathrm{Net}_{w}(x), y) + C_{i} \|w\|_1$$

which is very similar to weight decay, with the L1 norm penalized instead of the L2 norm. The main difference between AWD and the non-adaptive method is that the regularization hyper-parameter in the L1 formulation changes at every iteration in AWD. Intuitively, we argue that if, for a convex loss function $\mathrm{Loss}$ and any constant $C$, the L1 formulation is a convex optimization problem, then at every step of our algorithm we are solving a convex problem as well.
In practice, the fluctuations in $C_{i}$ are rather small, and all experiments that we have launched to date have successfully converged.

> Q2. What is inevitable about equation (7)? Of course, you are scaling with the gradient of the cross-entropy loss, but I would like to know the direct effect of why that is effective for adversarial robustness.

The adaptive reformulation (scaling) can result in: 1) easier optimization of both objectives in the main loss compared to traditional weight decay, as evident from the results in the last column of Table 1; 2) the possibility of reaching solutions with smaller weight norms, which can be seen as flatter minima that generalize better; 3) and most importantly, alleviating robust overfitting by allowing for stronger regularization.

> Q3 and Q4. Why do robust validation accuracy and loss suddenly improve around 200 epochs? Since it is possible that the performance is good by chance when stopping early at 200 epochs, we would like to see a discussion on convergence. What happens if we increase the number of epochs up to 300 epochs?

That is a good observation. To empirically address the question about convergence, we have conducted a set of experiments where we keep everything constant and only vary the number of training epochs. As can be seen in Table R2 (in the general response) and Table P2 (response to reviewer pbDj), AWD is not very sensitive to the number of training epochs. Our method always outperforms the non-adaptive method, both in terms of natural and adversarial accuracy.

> Q5. In Table 2, the number of epochs varies for existing methods; if each method is adjusted in terms of early stopping, this may not be a problem, but the conditions should be as consistent as possible.

Thanks for bringing this to our attention. Initially, we had used the common setting from Madry et al. and had trained with 200 epochs. We have included 100-epoch results in Table R3 (general response).
AWD models trained with 100 epochs have similar performance to those trained with 200 epochs.

> Q6. Does the proposed method work well in a more natural setting such as ImageNet-A? The adversarial robustness setting is a bit artificial, which is a problem of this field itself.

Given the limited time and resources available during the rebuttal period, we could not perform ImageNet-A experiments. However, ImageNet results are in the appendix. We are not sure whether our method will be helpful in settings where under-fitting occurs. ImageNet is a dataset that is hard to fit, hence the adaptive and non-adaptive methods perform similarly. We believe that both methods (traditional and adaptive weight decay) should have similar performance on ImageNet-A.

> Q7. What is the performance in the normal problem setting instead of adversarial robustness? In particular, do we see a performance decrease with different DoG settings? If so, how do we determine the final DoG value? Even if the user makes the final decision, if the performance in the normal setting is significantly degraded as a result of considering adversarial robustness, we may not want to use it as a method.

Finding a solution that improves both natural accuracy and adversarial robustness is one of the main goals of any robust algorithm. Throughout our adversarial training experiments, AWD yields solutions that improve both natural and robust accuracy under various hyper-parameters and datasets compared to traditional weight decay. Regarding using AWD with different DoGs outside the context of adversarial robustness: since overfitting is less of an issue in normal problem settings, we mainly see on-par performance between AWD and traditional WD when the hyperparameters for each method are appropriately tuned. For example, please see Fig. 11 in the appendix.
However, we have noticed that even in normal settings AWD has some robustness benefits over WD, such as robustness to the learning rate (as evident in Fig. 11), which could make hyperparameter searches easier.

---

Rebuttal Comment 1.1: Title: After Rebuttal Comment: Thanks for your reply to my review. I have cleared up some questions, but I still have questions about the following two points.

> Q1. I would like to know if this algorithm is designed to minimize the original optimization problem. It is difficult to set up in general, but usually when considering adaptive optimization algorithms like this, it is important for the safety of the algorithm that convergence is guaranteed, at least for optimization of convex functions.

What I want to know is not an intuitive explanation, but what is actually being solved as an optimization problem and whether the proposed algorithm converges in a simple convex minimization.

> Q2. What is inevitable about equation (7)? Of course, you are scaling with the gradient of the cross-entropy loss, but I would like to know the direct effect of why that is effective for adversarial robustness.

You explain this in terms of flatter minima and generalization. However, flatness is a formulation based on perturbations in parameter space, while adversarial robustness is a formulation based on perturbations in input space. Thus, this is not a direct explanation.

---

Reply to Comment 1.1.1: Comment: We are happy that many of your concerns have been addressed and are grateful for the opportunity to address/clarify the 2 remaining ones. Please find our responses below.

> Q1. What I want to know is not an intuitive explanation, but what is actually being solved as an optimization problem and whether the proposed algorithm converges in a simple convex minimization.
Following your request, below we show that, for a simple choice of convex functions, our proposed loss function is locally convex (at least near the minimizer), and hence SGD will be able to converge to a stable/stationary local minimum. Consider the following simple regression problem where the main objective is the MSE: $0.5 \cdot \| x- \beta \|^2$. In this setting, our total loss formulation, which includes the adaptive weight-decay regularizer term, becomes: $0.5 \cdot \| x- \beta \|^2 + c \cdot \frac{|x-\beta|}{|x|}$, where $c=0.5 \cdot DoG$ and $x \neq 0$. We consider all possible cases for the choices of $\beta$ and $x$ and show that, for all 7 cases, as long as we choose the hyper-parameter $0 < c < \frac{\beta^2}{2}$, which translates to $0 < DoG < \beta^2$, the problem is locally convex in those regimes, by showing that the second derivative is always positive.

|$\beta$|$x$|$\beta$ vs. $x$ Comparison|Minimization|Simplified Minimization|2nd Derivative|Condition for Convexity|$c$ Satisfying Convexity|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|$\beta>0$|$x>0$|$x>\beta$|$0.5\cdot\|x-\beta\|^2+c\frac{ \|x-\beta\| }{ \|x\| }$|$0.5\cdot(x-\beta)^2+c\frac{x-\beta}{x}$|$1-\frac{2\beta c}{x^3}$|$1-\frac{2\beta c}{x^3}>0$|$c<\frac{\beta^2}{2}$|
|$\beta>0$|$x>0$|$x<\beta$|$0.5\cdot\|x-\beta\|^2+c\frac{ \|x-\beta\| }{ \|x\| }$|$0.5\cdot(x-\beta)^2-c\frac{x-\beta}{x}$|$1+\frac{2\beta c}{x^3}$|$1+\frac{2\beta c}{x^3}>0$|$c>0$|
|$\beta>0$|$x<0$|$x<\beta$|$0.5\cdot\|x-\beta\|^2+c\frac{ \|x-\beta\| }{ \|x\| }$|$0.5\cdot(x-\beta)^2+c\frac{x-\beta}{x}$|$1-\frac{2\beta c}{x^3}$|$1-\frac{2\beta c}{x^3}>0$|$c>0$|
|$\beta<0$|$x>0$|$x>\beta$|$0.5\cdot\|x-\beta\|^2+c\frac{ \|x-\beta\| }{ \|x\| }$|$0.5\cdot(x-\beta)^2+c\frac{x-\beta}{x}$|$1-\frac{2\beta c}{x^3}$|$1-\frac{2\beta c}{x^3}>0$|$c>0$|
|$\beta<0$|$x<0$|$x>\beta$|$0.5\cdot\|x-\beta\|^2+c\frac{ \|x-\beta\| }{ \|x\| }$|$0.5\cdot(x-\beta)^2-c\frac{x-\beta}{x}$|$1+\frac{2\beta c}{x^3}$|$1+\frac{2\beta c}{x^3}>0$|$c>0$|
|$\beta<0$|$x<0$|$x<\beta$|$0.5\cdot\|x-\beta\|^2+c\frac{ \|x-\beta\| }{ \|x\| }$|$0.5\cdot(x-\beta)^2+c\frac{x-\beta}{x}$|$1-\frac{2\beta c}{x^3}$|$1-\frac{2\beta c}{x^3}>0$|$c<\frac{\beta^2}{2}$|
|$\beta=0$|Any|-|$0.5\cdot\|x-\beta\|^2+c\frac{ \|x-\beta\| }{ \|x\| }$|$0.5\cdot(x-\beta)^2$|1|Always convex|Any $c$|

Table R1: Conditions guaranteeing convexity and convergence of AWD for a simple regression.

> Q2. flatness is a formulation based on perturbations in parameter space, while adversarial robustness is a formulation based on perturbations in input space. Thus, this is not a direct explanation.

We would like to clarify a possible confusion: we did not mean that adversarial training could yield flatter minima, and we completely agree with the reviewer. Our response was in regards to the question of why, in terms of adversarial robustness, AWD-trained models perform better than those trained with traditional weight decay. To elaborate further, we attribute this to the model parameters/weights. As seen in Table 1 of the original paper (we have gathered the most relevant subset below in Table R2), AWD models have significantly smaller weight norms than the best models trained with traditional weight decay. Models with smaller weight norms are often simpler and can generalize better (as long as the model is still capable of fitting the data).
|Dataset|Method|$\|W\|_2$|Nat|Adv|
|:-:|:-:|:-:|:-:|:-:|
|CIFAR-10|Weight-Decay|35.58|84.31|45.19|
|CIFAR-10|AWD|**7.11**|**87.08**|**50.03**|
|CIFAR-100|Weight-Decay|51.32|60.15|22.53|
|CIFAR-100|AWD|**13.41**|**61.39**|**27.15**|
|Tiny ImageNet|Weight-Decay|25.62|47.87|16.73|
|Tiny ImageNet|AWD|**15.01**|**48.46**|**19.74**|
|SVHN|Weight-Decay|102.11|92.04|44.16|
|SVHN|AWD|**5.39**|**93.04**|**47.10**|
|FashionMNIST|Weight-Decay|14.39|83.96|78.73|
|FashionMNIST|AWD|**9.05**|**85.42**|**79.24**|
|Flowers|Weight-Decay|19.94|**90.98**|32.35|
|Flowers|AWD|**13.87**|90.39|**39.22**|

Table R2: Relevant subset of columns from Table 1, showing that AWD-trained models have considerably smaller weight norms than models trained with traditional weight decay, resulting in simpler models that generalize better.
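The local-convexity case analysis in Table R1 above can be sanity-checked numerically with finite differences; the following sketch uses the first table row ($x > \beta > 0$), with illustrative test points of our choosing:

```python
def f(x, beta, c):
    """AWD total loss for the 1-D regression example: MSE plus c*|x-beta|/|x|."""
    return 0.5 * (x - beta) ** 2 + c * abs(x - beta) / abs(x)

def second_derivative(func, x, h=1e-4):
    """Central finite-difference estimate of the second derivative at x."""
    return (func(x + h) - 2 * func(x) + func(x - h)) / h ** 2

beta, c = 2.0, 1.0                       # c = 1.0 < beta**2 / 2 = 2.0, condition holds
for x in [2.5, 3.0, 5.0, 10.0]:          # regime x > beta > 0 (first row of Table R1)
    d2 = second_derivative(lambda t: f(t, beta, c), x)
    closed_form = 1 - 2 * beta * c / x ** 3
    assert d2 > 0 and abs(d2 - closed_form) < 1e-3

# with c above the bound, convexity can fail near x = beta
assert second_derivative(lambda t: f(t, 2.0, 5.0), 2.05) < 0
```

The numeric estimates match the closed-form second derivative $1 - \frac{2\beta c}{x^3}$ from the table, and the counterexample shows the bound $c < \frac{\beta^2}{2}$ is not vacuous.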
Summary: The paper proposes adaptive weight decay (AWD), which adaptively tunes the weight decay hyperparameter during training. AWD keeps the ratio of the weight decay update to the cross-entropy loss update constant for training stability. The experiments show that AWD improves adversarial robustness and reduces robust overfitting in adversarial training. Strengths: The paper is clearly written and easy to follow. The experiments show the benefit of AWD over several baselines on CIFAR10/100 and Tiny ImageNet. Weaknesses: As a reviewer who has reviewed this submission 3 times, my major concern before was the comparison of AWD with recent baselines like MART and MAIL. I am glad that Table 2 of this version includes such a comparison. The results look promising to me, so I would not raise any major weaknesses for this version. One minor issue is the training time difference between DoG and the baselines. Table 2 shows that DoG needs 200 epochs of training while all baselines use 100 epochs. It is not clear how these baselines would perform if we increased the training time, with learning rate decay delayed of course. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Is the proposed method still better if baselines like MART and MAIL are trained for the same time as AWD? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Not discussed in the main paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
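The constant-ratio rule summarized in this review can be sketched as a single SGD step. This is a hedged reading of the method's description, not the authors' exact implementation: the formula for the per-iteration coefficient `lam_t` is our assumption about how the fixed ratio `dog` is enforced.

```python
import numpy as np

def awd_sgd_step(w, grad_loss, dog, lr):
    """One SGD step with adaptive weight decay (hypothetical sketch).

    The per-iteration decay coefficient lam_t is chosen so that the ratio
    of the weight-decay gradient norm to the loss-gradient norm stays at
    the fixed value `dog`:  lam_t * ||w|| / ||grad_loss|| == dog.
    """
    lam_t = dog * np.linalg.norm(grad_loss) / (np.linalg.norm(w) + 1e-12)
    return w - lr * (grad_loss + lam_t * w)

# toy usage on a 4-dimensional weight vector
w = np.ones(4)
grad = np.full(4, 0.5)                     # stand-in for a cross-entropy gradient
w_next = awd_sgd_step(w, grad, dog=0.02, lr=0.1)
```

Under this reading, the effective decay strength automatically grows when the loss gradient is large (early training) and shrinks as the loss flattens, which is one way to interpret the stability the review mentions.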
Rebuttal 1: Rebuttal: We thank you, reviewer N3sV, for your insightful comments. Please review our responses below.

> As a reviewer who reviewed the submission for 3 times, my major concern before is the comparison of AWD with recent baselines like MART and MAIL. I am glad that Table 2 of this version includes such a comparison. The result looks promising to me so I would not raise any major weaknesses for this version.

Thank you for all your great suggestions throughout this paper's journey. Your constructive comments and thoughts have shaped this paper into what it is today, and we would like to show our warmest appreciation and thank you for it.

> One minor issue is the training time difference between DoG and baselines. Table 2 shows that DoG needs 200 epochs of training while all baselines use 100 epochs. It is not clear how these baselines will perform if we increase the training time, with learning rate decay delayed of course. Is the proposed method still better if baselines like MART and MAIL are trained for the same time as AWD?

Thank you for bringing this to our attention. Initially, for training our models, we followed the conventional 200-epoch training of robust CIFAR models (from [1]). But comparing methods in terms of their convergence speed by keeping the number of epochs constant is also very valuable. We have run experiments with our method, keeping the DoG hyper-parameter fixed to 0.022 and varying the number of epochs. The results are summarized in Table N1.
|Dataset|Method|WRN|Aug|Epochs.|Nat|AutoAttack|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|CIFAR-100|AWD (ours) with Dog=0.022 + ASAM|32-10|P&C|5|26.79|11.03|
|CIFAR-100|AWD (ours) with Dog=0.022 + ASAM|32-10|P&C|10|38.10|15.92|
|CIFAR-100|AWD (ours) with Dog=0.022 + ASAM|32-10|P&C|20|51.27|21.67|
|CIFAR-100|AWD (ours) with Dog=0.022 + ASAM|32-10|P&C|30|58.10|25.03|
|CIFAR-100|AWD (ours) with Dog=0.022 + ASAM|32-10|P&C|40|62.01|27.51|
|CIFAR-100|AWD (ours) with Dog=0.022 + ASAM|32-10|P&C|50|62.85|29.25|
|CIFAR-100|AWD (ours) with Dog=0.022 + ASAM|32-10|P&C|100|**64.49**|29.70|
|CIFAR-100|AWD (ours) with Dog=0.022 + ASAM|32-10|P&C|150|64.17|**29.94**|
|CIFAR-100|AWD (ours) with Dog=0.022 + ASAM|32-10|P&C|200|64.37|29.55|
|CIFAR-100|AWD (ours) with Dog=0.022 + ASAM|32-10|P&C|250|63.24|29.68|
|CIFAR-100|AWD (ours) with Dog=0.022 + ASAM|32-10|P&C|300|63.35|29.28|

Table N1: CIFAR-100 WRN32-10 AWD with varying numbers of epochs for studying convergence

While at 100 epochs our results (both natural accuracy and robustness) are comparable to those of our models trained with 200 epochs, we found that even when we further reduce the number of epochs to 50, we do not see a big degradation in robust accuracy, although the natural accuracy degrades slightly. When we go below 50 epochs, the robust accuracy degrades. Also, in Table R2 in the general response, we have compared WD and AWD for various choices of the number of epochs. In Table R2, for the WD case we tuned the lambda hyper-parameter and picked the best one (the one resulting in the highest robust accuracy), as opposed to AWD, where we only report the results for the fixed hyper-parameter DoG=0.022. We have also updated Table 2 from our original manuscript to include the results of our method with 100 epochs. For your reference, Table N2 summarizes the results.
|Dataset|Method|WRN|Aug|Epochs.|Nat|AutoAttack|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|CIFAR-100|AT (Madry et al., 2017)|32-10|P&C|100|60.13|24.76|
|CIFAR-100|TRADES (Zhang et al., 2019)|32-10|P&C|100|60.73|24.90|
|CIFAR-100|MART (Wang et al., 2020)|32-10|P&C|100|54.08|25.30|
|CIFAR-100|FAT (Zhang et al., 2020a)|32-10|P&C|100|**66.74**|20.88|
|CIFAR-100|AWP (Wu et al., 2020)|32-10|P&C|100|55.16|25.16|
|CIFAR-100|GAIRAT (Zhang et al., 2020b)|32-10|P&C|100|58.43|17.54|
|CIFAR-100|MAIL-AT (Liu et al., 2021)|32-10|P&C|100|60.74|22.44|
|CIFAR-100|MAIL-TR (Liu et al., 2021)|32-10|P&C|100|60.13|24.80|
|CIFAR-100|AWD (ours) with Dog=0.022 + ASAM|32-10|P&C|200|64.37|29.55|
|**CIFAR-100 (New)**|**AWD (ours) with Dog=0.022 + ASAM**|**32-10**|**P&C**|**100**|64.49|**29.70**|

Table N2: CIFAR-100 SOTA comparisons without additional data and with the same number of epochs

[1] Madry, Aleksander, et al. "Towards deep learning models resistant to adversarial attacks." arXiv preprint arXiv:1706.06083 (2017).

[2] Rebuffi, S.A., et al. "Fixing data augmentation to improve adversarial robustness." arXiv preprint arXiv:2103.01946.

---

Rebuttal Comment 1.1: Title: After Rebuttal Comment: Thanks to the authors for consistently improving this paper along the (kind of long) journey. My concerns are addressed and I will keep my score.
Rebuttal 1: Rebuttal: Dear reviewers, thanks for your insightful and constructive comments. Below, you may find the tables mentioned in the detailed responses to each review. Due to character limitations, please find more detailed explanations of the following tables in the per-reviewer rebuttals.

|Eps|Data|Alg|Lambda/DoG|Nat|20|40|60|80|100|AA-SQ|AA-CE|AA-FAB|AA-T|Min|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|2|C10|WD|0.00089|94.2|83.2|83.1|83.2|83.1|83.1|86.9|82.7|82.7|82.5|82.5|
|2|C10|AWD|0.02181|**94.3**|**83.6**|**83.6**|**83.6**|**83.6**|**83.6**|**87**|**83.2**|**83.1**|**83**|**83**|
|2|C100|WD|0.00067|74.8|55.8|55.8|55.7|55.7|55.7|59.2|54.8|52.9|52.7|52.7|
|2|C100|AWD|0.02181|**75.2**|**56.7**|**56.7**|**56.7**|**56.6**|**56.7**|**59.6**|**56**|**53.7**|**53.4**|**53.4**|
|4|C10|WD|0.00158|91.7|70.7|70.7|70.5|70.6|70.6|75.3|69.2|69.4|69|69|
|4|C10|AWD|0.02181|**92**|**73.1**|**73.1**|**73.1**|**73**|**73**|**77.8**|**72**|**71.7**|**71.3**|**71.3**|
|4|C100|WD|0.00089|69.4|42|42|42|41.9|41.9|44.8|40.4|38.9|38.6|38.6|
|4|C100|AWD|0.02181|**71.5**|**46.8**|**46.7**|**46.7**|**46.7**|**46.7**|**48.3**|**45.2**|**41.2**|**40.8**|**40.8**|
|6|C10|WD|0.00119|88.7|59|58.9|58.9|58.9|58.9|63.9|56|56.5|55.8|55.8|
|6|C10|AWD|0.02181|**90**|**62.4**|**62.3**|**62.4**|**62.3**|**62.3**|**67.3**|**60.3**|**60**|**59.5**|**59.5**|
|6|C100|WD|0.00067|64.7|32.8|32.6|32.7|32.6|32.6|35|30.9|29.5|29.2|29.2|
|6|C100|AWD|0.02181|**66.3**|**39.6**|**39.5**|**39.5**|**39.5**|**39.5**|**39.8**|**37.7**|**33.4**|**33.1**|**33.1**|
|8|C10|WD|0.00158|84|49.5|49.2|49.4|49.4|49.4|54.3|46.5|45.1|44.7|44.7|
|8|C10|AWD|0.02181|**87.3**|**53.9**|**53.8**|**53.7**|**53.8**|**53.8**|**58.1**|**51.4**|**50.1**|**49.6**|**49.6**|
|8|C100|WD|0.00158|56.5|27.7|27.7|27.7|27.5|27.5|28.9|25.9|22.6|22.4|22.4|
|8|C100|AWD|0.02181|**61.6**|**33.1**|**33**|**33.1**|**33.1**|**33**|**32.5**|**31**|**26.7**|**26.4**|**26.4**|
|16|C10|WD|0.00119|70.4|32|31.7|31.6|31.5|31.7|30.5|27.1|22.6|21.6|21.6|
|16|C10|AWD|0.02181|**71.9**|**34**|**33.8**|**33.7**|**33.7**|**33.7**|**33.2**|**29.6**|**26.1**|**25.3**|**25.3**|
|16|C100|WD|0.00281|38.3|16.5|16.5|16.5|16.5|16.5|14.2|14.8|11.3|11|11|
|16|C100|AWD|0.02181|**41.5**|**19.8**|**19.7**|**19.6**|**19.5**|**19.5**|**17**|**17.7**|**13.9**|**13.4**|**13.4**|

Table R1: Performance of AWD vs. WD on adversarially trained WRN28-10 networks with various values for $\epsilon$.

|Arch.|Epoch|Data|Alg|Lambda/DoG|Nat|20|40|60|80|100|AA-SQ|AA-CE|AA-FAB|AA-T|AA|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|WRN28-10|50|C10|WD|0.00158|86.5|52.6|52.4|52.4|52.4|52.5|56.7|49.5|48.5|48.0|48.0|
|WRN28-10|50|C10|AWD|0.02181|**87.1**|**54.3**|**54.0**|**54.0**|**54.0**|**54.1**|**58.7**|**51.3**|**50.0**|**49.5**|**49.5**|
|WRN28-10|50|C100|WD|0.00211|59.5|29.8|29.7|29.7|29.7|29.6|31.1|28.0|25.3|25.1|25.1|
|WRN28-10|50|C100|AWD|0.02181|**61.9**|**32.4**|**32.4**|**32.4**|**32.4**|**32.3**|**32.8**|**30.5**|**26.7**|**26.4**|**26.4**|
|WRN28-10|100|C10|WD|0.00211|85.8|50.8|50.6|50.5|50.6|50.6|55.4|47.7|47.1|46.6|46.6|
|WRN28-10|100|C10|AWD|0.02181|**87.7**|**55.1**|**55.0**|**55.0**|**54.9**|**54.9**|**59.6**|**52.5**|**51.5**|**51.2**|**51.2**|
|WRN28-10|100|C100|WD|0.00281|58.2|28.2|28.1|28.0|28.1|28.1|29.4|26.3|23.7|23.4|23.4|
|WRN28-10|100|C100|AWD|0.02181|**62.5**|**33.3**|**33.3**|**33.3**|**33.2**|**33.2**|**33.0**|**31.2**|**27.1**|**26.7**|**26.7**|
|WRN28-10|200|C10|WD|0.00158|84.1|50.3|50.1|50.1|50.0|50.0|54.8|47.4|46.2|45.7|45.7|
|WRN28-10|200|C10|AWD|0.02181|**87.3**|**54.2**|**54.0**|**54.0**|**54.0**|**54.0**|**58.5**|**51.4**|**50.5**|**50.0**|**50.0**|
|WRN28-10|200|C100|WD|0.0005|60.5|25.2|25.2|25.1|25.0|25.1|27.0|23.1|22.4|22.2|22.2|
|WRN28-10|200|C100|AWD|0.02181|**62.0**|**33.1**|**32.9**|**33.0**|**32.9**|**32.9**|**32.1**|**30.9**|**26.8**|**26.4**|**26.4**|
|WRN28-10|300|C10|WD|0.00089|86.2|48.9|48.8|48.6|48.6|48.6|52.8|44.9|45.2|44.5|44.5|
|WRN28-10|300|C10|AWD|0.02181|**87.3**|**52.8**|**52.7**|**52.8**|**52.8**|**52.7**|**57.4**|**50.3**|**49.4**|**48.8**|**48.8**|
|WRN28-10|300|C100|WD|0.00028|59.5|25.6|25.5|25.5|25.5|25.5|27.0|23.6|22.6|22.3|22.3|
|WRN28-10|300|C100|AWD|0.02181|**62.0**|**33.1**|**32.9**|**33.0**|**32.9**|**32.9**|**32.1**|**30.9**|**26.8**|**26.4**|**26.4**|

Table R2: Performance of AWD and WD on adversarially trained WRN28-10 networks trained for various numbers of epochs.

|Dataset|Method|WRN|Aug|Epochs.|Nat|AutoAttack|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|CIFAR-100|AT (Madry et al., 2017)|32-10|P&C|100|60.13|24.76|
|CIFAR-100|TRADES (Zhang et al., 2019)|32-10|P&C|100|60.73|24.90|
|CIFAR-100|MART (Wang et al., 2020)|32-10|P&C|100|54.08|25.30|
|CIFAR-100|FAT (Zhang et al., 2020a)|32-10|P&C|100|**66.74**|20.88|
|CIFAR-100|AWP (Wu et al., 2020)|32-10|P&C|100|55.16|25.16|
|CIFAR-100|GAIRAT (Zhang et al., 2020b)|32-10|P&C|100|58.43|17.54|
|CIFAR-100|MAIL-AT (Liu et al., 2021)|32-10|P&C|100|60.74|22.44|
|CIFAR-100|MAIL-TR (Liu et al., 2021)|32-10|P&C|100|60.13|24.80|
|CIFAR-100|AWD (ours) with Dog=0.022 + ASAM|32-10|P&C|200|64.37|29.55|
|CIFAR-100 (New)|AWD (ours) with Dog=0.022 + ASAM|32-10|P&C|100|64.49|**29.70**|
|CIFAR-10 (New)|AT (Madry et al., 2017)|32-10|P&C|100|87.80|48.46|
|CIFAR-10 (New)|TRADES (Zhang et al., 2019)|32-10|P&C|100|86.36|53.40|
|CIFAR-10 (New)|MART (Wang et al., 2020)|32-10|P&C|100|84.76|51.40|
|CIFAR-10 (New)|FAT (Zhang et al., 2020a)|32-10|P&C|100|**89.70**|47.48|
|CIFAR-10 (New)|AWP (Wu et al., 2020)|32-10|P&C|100|57.55|53.08|
|CIFAR-10 (New)|GAIRAT (Zhang et al., 2020b)|32-10|P&C|100|86.30|40.30|
|CIFAR-10 (New)|MAIL-AT (Liu et al., 2021)|32-10|P&C|100|84.83|47.10|
|CIFAR-10 (New)|MAIL-TR (Liu et al., 2021)|32-10|P&C|100|84.00|53.90|
|CIFAR-10 (New)|AWD (ours) with Dog=0.022 + ASAM|32-10|P&C|100|88.55|**54.04**|

Table R3: Comparison between AWD and other robustness methods.
Dataset source: NeurIPS 2023 submissions (Hugging Face). Conference year: 2023.
Summary: This paper studies the overfitting phenomenon that is known to occur during adversarial training, with a focus on image classification. The main idea hinges on the fact that weight regularization can be an effective technique to prevent such overfitting. The authors propose to augment the regular cross-entropy objective during training with a weight-decay regularization term whose parameter is adapted to better prevent overfitting. In the absence of theoretical analysis, the idea is tested empirically on a number of image classification tasks, where its effectiveness is demonstrated. Strengths: The main advantage of this work is its simplicity, as well as its adequate development of the idea from initial observations to an intuitive explanation of why the approach makes sense. Weaknesses: The tests carried out in this paper seem somewhat limited. While the idea is based on state-of-the-art work in the literature [Rice et al., etc.], the results are not compared with these algorithms. Different choices of $\epsilon$ as well as of attacks are also needed to show the effectiveness of the proposal across a wide range of setups, especially given that the work is a heuristic approach without any theoretical evidence. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1- The tests should be carried out for a couple of different values of $\epsilon$ as well as other standard attacks, to demonstrate performance across a range of settings. In addition to AutoAttack, performance against PGD attacks with large enough #steps (say $K = 40$ or $100$) and small enough step-size (say $2\epsilon/K$) is common practice to report. 2- The results in Table 1 also need to be compared with other state-of-the-art algorithms, not just constant versus adaptive weight decay. 3- As mentioned in the paper, using additional data has been shown to be effective in improving robustness.
How effective is adaptive decay if also combined with such additional data? 4- Denoting the gradient of the cross-entropy part of the objective with respect to $w_t$ as $\nabla w_t$ is mathematically wrong (see eqs. 5, 6, 7); the same issue appears in line 6 of Algorithm 1. This needs to be corrected for mathematical soundness. Instead, one should define a function, say $F(\cdot)$, to denote the cross-entropy loss, and use that notation. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
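The PGD evaluation protocol described in question 1 ($K$ gradient-sign steps of size $2\epsilon/K$, projected back into the $\ell_\infty$ ball) can be sketched as follows. This is an illustrative sketch, not the paper's evaluation code: the logistic model and its analytic input gradient are stand-ins introduced here.

```python
import numpy as np

def pgd_linf(x, y, w, b, eps, steps):
    """l_inf PGD on a toy logistic model p = sigmoid(w.x + b).

    Step size is 2*eps/steps, as suggested in the review; for this toy
    model the gradient of the cross-entropy loss w.r.t. the input is
    (p - y) * w, so no autodiff framework is needed.
    """
    alpha = 2 * eps / steps
    x_adv = x.copy()
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(x_adv @ w + b)))
        grad = (p - y) * w                        # d(CE)/dx for this model
        x_adv = x_adv + alpha * np.sign(grad)     # ascent step on the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project into the eps-ball
    return x_adv
```

With, say, $K = 40$, the adversarial point stays within the $\epsilon$-ball around the clean input while the cross-entropy loss does not decrease.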
Rebuttal 1: Rebuttal: We thank you, reviewer c9pV, for your insightful comments. Please review our responses to the questions you asked.

> While the idea is based on state-of-the-art work in the literature [Rice et al., etc.], the results are not compared with these algorithms.

Thank you for the suggestion regarding comparison with stronger baselines. We agree about the significance of the work of (Rice et al.). In all of our experiments we perform validation-based early stopping; hence, our conventional WD experiments can be seen as the results from Rice et al. In addition, we compare our CIFAR-100 results to several other SOTA methods in Table 2 of the original submission. During the rebuttal period, we have also included comparisons to SOTA methods on CIFAR-10, which are summarized in Table C1 below:

|Dataset|Method|WRN|Aug|Epochs|Nat|AutoAttack|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|CIFAR-10 (New)|AT (Madry et al., 2017)|32-10|P&C|100|87.80|48.46|
|CIFAR-10 (New)|TRADES (Zhang et al., 2019)|32-10|P&C|100|86.36|53.40|
|CIFAR-10 (New)|MART (Wang et al., 2020)|32-10|P&C|100|84.76|51.40|
|CIFAR-10 (New)|FAT (Zhang et al., 2020a)|32-10|P&C|100|**89.70**|47.48|
|CIFAR-10 (New)|AWP (Wu et al., 2020)|32-10|P&C|100|57.55|53.08|
|CIFAR-10 (New)|GAIRAT (Zhang et al., 2020b)|32-10|P&C|100|86.30|40.30|
|CIFAR-10 (New)|MAIL-AT (Liu et al., 2021)|32-10|P&C|100|84.83|47.10|
|CIFAR-10 (New)|MAIL-TR (Liu et al., 2021)|32-10|P&C|100|84.00|53.90|
|CIFAR-10 (New)|AWD (ours) with DoG=0.022 + ASAM|32-10|P&C|100|88.55|**54.04**|

Table C1: CIFAR-10 robustness comparisons

> Different choices of $\epsilon$ as well as attacks are also needed to show effectiveness of the proposition across a wide range of setups, especially given that the work is a heuristic approach without any theoretical evidence.

We thank you for bringing this to our attention. As you requested, we have run experiments with several values of $\epsilon$.
The experimental setup for these experiments is similar to that of Table 1 from the original paper, except for the $\epsilon$ values used for training and evaluation. We run experiments on WRN28-10 for both CIFAR-10 and CIFAR-100. We use a fixed DoG hyper-parameter of 0.022; for non-adaptive WD, we do a grid search and report the best WD per parameter setup according to robust test accuracy, measured by conducting AA and PGD with various numbers of steps. Table R1 (in the general response) shows that AWD outperforms conventional WD training for all values of $\epsilon$, in both natural accuracy and robustness.

> 1- The tests should be carried out for a couple of different values of ϵ as well as other standard attacks, to demonstrate performance across a range of settings. In addition to AutoAttack, performance against PGD attacks with large enough #steps (say K= 40 or 100) and small enough step-size (say 2ϵ/K) is common practice to report.

Thanks for this suggestion. As requested, we have included our robustness results with various numbers of PGD steps for adversarially trained WRN28-10 networks. We use the step size you suggested to evaluate the experiments summarized in Tables R1 and R2 of our general response. We agree that our method should be tested against several attacks. We used AutoAttack as our primary evaluation metric since it incorporates four of the most well-known and strongest adversarial attacks: APGD-CE (a step-size-free PGD attack), APGD-DLR (a step-size-free PGD attack with the DLR loss), FAB (which minimizes the norm of the adversarial perturbation) (Croce & Hein, 2019), and SQUARE (a query-efficient black-box attack) (Andriushchenko et al., 2019). Our evaluations also show that AutoAttack yields the strongest attacks.

> 2- Results in table 1 need to also be compared with other state of the art algorithms for comparison, and not just constant versus adaptive weight decay.
Please note that the non-adaptive rows in Table 1 incorporate the early-stopping method suggested by (Rice et al.), so they can be interpreted as a comparison to (Rice et al.). Per your request, we compared our results to other methods on CIFAR-10 in our response above (Table C1).

> 3- As mentioned in the paper, using additional data has shown to be effective in improving robustness. How effective is adaptive decay if also combined with such additional data?

As mentioned in our paper, our method is most effective in settings that suffer from [robust] overfitting. If we have more training data to fit, less overfitting happens. As a result, our method is less likely to be as effective in such settings; the gap between traditional WD and AWD tightens, and their robustness becomes more comparable. To further illustrate this point, we train a WideResNet 28-10 with Swish activation and a batch size of 1024 for 800 epochs, with the extra data, similar to the experiments in Table 2 of (Rebuffi et al.). Since sweeping the hyper-parameters would require considerable resources, we used the hyper-parameters described in (Rebuffi et al.) for the WD experiment. To estimate a good hyper-parameter for the AWD experiment, we monitored the average DoG value during training of the non-adaptive method. Table C2 compares the performance of WD and AWD with extra data. As seen in the table, the adaptive method achieves results comparable to the non-adaptive method, and the gap between the two methods is closing.

|Name|Lambda/DoG|Natural Acc|AutoAttack|
|:-:|:-:|:-:|:-:|
|(Rebuffi et al.)|WD=0.0005|89.42|63.05|
|(Rebuffi et al.) + AWD|AWD=0.18|**90.53**|**63.55**|

Table C2: Performance of AWD with more data.

> 4- correctness and fixes to formulations

Thanks! We will update the next version of our manuscript to reflect your suggestions. --- Rebuttal Comment 1.1: Title: Response Comment: I would like to thank the authors for their response.
The above has addressed most of my concerns, and I have slightly increased my rating score.
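For concreteness, the kind of adaptive weight-decay step discussed in this thread can be sketched as follows. Caveat: the specific adaptive rule below (a coefficient proportional to the ratio of the loss-gradient norm to the weight norm, scaled by a `dog` constant) and the toy quadratic objective are assumptions introduced for illustration, not necessarily the paper's exact update.

```python
import numpy as np

def awd_step(w, grad_loss, lr, dog):
    """One gradient step with an adaptive weight-decay coefficient.

    Illustrative rule (an assumption, not necessarily the paper's):
    lambda_t = dog * ||grad_loss|| / ||w||, so the decay term tracks the
    scale of the loss gradient instead of staying constant over training.
    """
    lam = dog * np.linalg.norm(grad_loss) / (np.linalg.norm(w) + 1e-12)
    return w - lr * (grad_loss + lam * w)

# Toy quadratic loss 0.5 * ||w - w_star||^2, whose gradient is w - w_star.
w_star = np.ones(4)
w = 2.0 * np.ones(4)
for _ in range(200):
    w = awd_step(w, w - w_star, lr=0.1, dog=0.02)
```

As the loss gradient vanishes near the optimum, the adaptive coefficient shrinks with it, so the regularizer stops fighting convergence; this is one intuition for why an adaptive coefficient can behave better than a fixed one.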
Learning Mixtures of Gaussians Using the DDPM Objective
Accept (poster)
Summary: The authors propose to leverage the DDPM objective to learn mixtures of Gaussians and prove that gradient descent on the DDPM objective can efficiently recover the ground-truth parameters of the mixture model under certain assumptions. Strengths: Several interesting insights are revealed, such as those associated with Eq. (5) and those related to large/low noise levels. Weaknesses: (1) The novelty of the proposed method over existing works is not clearly stated. For example, in Line 52, it is stated that ``EM achieves the quantitative guarantees of Theorems 1 and 2 Daskalakis et al. (2017); Xu et al. (2016);...'' So what are the advantages of using the DDPM objective? (2) It is not clear how practical the proposed method is for general mixtures of Gaussians with $K\ge 3$ components, where finding a global optimum of the log-likelihood is generally impossible [1]. Note that a contradiction between the presented paper and [1] might emerge here, because minimizing the DDPM objective is equivalent to maximizing a hierarchically constructed lower bound of the log-likelihood [2]. (3) No empirical experiments are given. [1] Chi Jin, Yuchen Zhang, Sivaraman Balakrishnan, Martin J Wainwright, and Michael I Jordan. Local maxima in the likelihood of Gaussian mixture models: Structural results and algorithmic consequences. Advances in Neural Information Processing Systems, 29, 2016. [2] Calvin Luo. Understanding Diffusion Models: A Unified Perspective. arXiv, 2022. Technical Quality: 1 poor Clarity: 3 good Questions for Authors: Please address the questions in the above ``Weaknesses.'' Minor. (1) "poweri" in Line 156 is a typo. (2) "Gaussian" in Line 214 is a typo. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 1 poor Presentation: 3 good Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their comments and address them point by point. We believe that the criticisms stem from misunderstandings about the scope of what is known in this literature and its relation to our work. We have also provided numerical experiments in the rebuttal, though we also clarify below a broader point about the role of experiments in theoretical works. Given that the stated concerns can all be addressed, we strongly urge the reviewer to reconsider their score. **EM vs. DDPM:** - Our work already points to one sense in which DDPM outperforms EM. The latter is known to suffer from poor global convergence, and provable guarantees for $K > 2$ require a warm start. In contrast, our theory suggests that the different noise regimes of DDPM inherently allow for a two-stage algorithm in which the high-noise regime leads to a spectral initialization close to the true parameters, and the low-noise regime mimics EM from this warm start. **Furthermore, we see this advantage borne out in the numerical experiments performed for the rebuttal.** - That said, note that the question of what advantages DDPM affords over EM is very much outside the scope of our work. We re-emphasize that prior to our work, there were *no known end-to-end* provable guarantees for using the DDPM objective for distribution learning, and one of the key conceptual contributions of our paper is to establish a formal connection between minimizing the DDPM objective and running EM. This connection is highly nontrivial, and given the significant attention that EM has received in the last five decades and the significant recent attention that DDPMs have received, we believe this will be of strong interest to the statistics and generative modeling theory communities moving forward. **Maximizing the log-likelihood:** - We believe the reviewer may be somewhat confused. The result of Jin et al.
says that 1) there are bad local maxima for the log likelihood, and 2) EM with *random* initialization will converge to bad critical points with high probability for $K > 2$ components. 1) is irrelevant to our paper, and 2) does not contradict anything in our paper, because in the $K > 2$ case, we only consider EM initialized in a neighborhood of the true parameters. In fact, as discussed above, we expect that a more sophisticated analysis of DDPMs in which we combine both high- and low-noise regimes will allow us to obtain a (non-EM-based) initialization close to the true parameters via the connection to power method in the high-noise regime, and then use EM-like updates in the low-noise regime to refine our estimate. - In any case, whether or not DDPMs exhibit global convergence for $K > 2$ is well outside the scope of our paper: we are giving the *first ever* provable guarantees for DDPMs for Gaussian mixtures. In contrast, it is a major open problem in the distribution learning literature to show that *any* algorithm not based on method of moments can learn for $K > 2$ without a warm start. **Experiments:** - We have performed a set of numerical experiments to validate our theory, as part of the rebuttal. Specifically, it shows that the constant factors in our analysis are benign so that our convergence guarantees are practical, and it demonstrates the utility of training using the DDPM objective in both high- and low-noise regimes. - That being said, we would like to make a broader point. Please note that this is a theory paper. The main contributions are a set of mathematical theorems. Experiments are orthogonal to the thrust of our work, and indeed orthogonal to the thrust of recent *theoretical* developments for diffusion models. The field of diffusion generative modeling is inundated with experimental works, and it is well-known that the algorithms used in this field are empirically successful. 
In the context of our work, there is not much to validate experimentally that would add to the empirical literature. - In contrast, our theoretical understanding of diffusion models is seriously lacking, apart from a handful of works written over the last year which we have cited in Section 1.1. The whole point of our work is to give a rigorous theoretical justification for why these existing algorithms are effective.
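The EM connection invoked in this rebuttal can be made concrete for the symmetric mixture $\tfrac12 N(\mu^*, I) + \tfrac12 N(-\mu^*, I)$, where the classical EM update is $\mu \leftarrow \frac1n \sum_i \tanh(\langle \mu, x_i\rangle)\, x_i$ (the form analyzed in Daskalakis et al., 2017). Below is a minimal numpy sketch with a warm start; the dimensions, sample size, and seed are illustrative choices, not the authors' rebuttal experiments.

```python
import numpy as np

def em_two_gaussians(X, mu0, iters=50):
    """EM for the symmetric mixture 0.5*N(mu, I) + 0.5*N(-mu, I).

    The E-step posterior weight of the +mu component at x is
    sigmoid(2 mu.x), which collapses the M-step into the tanh update.
    """
    mu = mu0.copy()
    for _ in range(iters):
        mu = (np.tanh(X @ mu)[:, None] * X).mean(axis=0)
    return mu

rng = np.random.default_rng(1)
d, n = 10, 20000
mu_star = np.zeros(d)
mu_star[0] = 2.0                                   # well-separated means
signs = rng.choice([-1.0, 1.0], size=n)[:, None]   # latent component labels
X = signs * mu_star + rng.normal(size=(n, d))
mu_hat = em_two_gaussians(X, mu0=mu_star + 0.2 * rng.normal(size=d))
```

From the warm start, the iterates contract toward $\mu^*$ up to the $O(\sqrt{d/n})$ sampling error, mirroring the low-noise, EM-like regime described in the rebuttal.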
Summary: This paper shows that the diffusion model can be used to learn mixtures of Gaussians. In particular, the authors show that GD on the DDPM objective can efficiently recover the ground-truth parameters for both mixtures of two spherical Gaussians and mixtures of $K$ spherical Gaussians, under different assumptions. The key ingredient in their analysis is the observation that there is an inherent connection between score-based models and EM/spectral methods. Strengths: This is a very interesting paper and can serve as a starting point to further explore the power of DDPM from the optimization perspective. First, mixtures of Gaussians are one of the most important and fundamental distribution families of interest, and showing that DDPM works for mixtures of Gaussians is strong support for its soundness. Second, unlike prior work, this work does not require the existence of an oracle for score estimation. Instead, they show that empirical estimation suffices. Lastly, the connection between DDPM and the classic algorithms in this field, i.e., EM and spectral methods, can be of independent interest. This might be used to show the effectiveness of DDPM. Weaknesses: My main concern is that the model and algorithm are a little bit artificial. - First, the model used for mixtures of two Gaussians is $s_{\theta_t}(x)=\tanh \left(\mu_t^{\top} x\right) \mu_t-x$. Here $-x$ can be understood as the residual connection, and $\tanh(\cdot)$ is also a reasonable activation. However, using the same weight $\mu_t$ for both layers is a little bit restrictive. A far more general model is $s_{\theta_t}(x)=\tanh \left(u_t^{\top} x\right) v_t-x$. My conjecture is that it is not hard to extend the current result to this "asymmetric" version. The reason is that there might exist some kind of balancedness between $u_t$ and $v_t$ so that they are roughly the same.
- Second, the authors consider two noise regimes: first they use a large noise level and run several iterations of GD, then they switch to the small-noise regime. This mimics the practical noise schedule. However, in practice, people decrease the noise level in a continuous manner, e.g., with exponential decay. Is it possible to adapt the current analysis to a more realistic noise-decaying regime? Some technical issues: - When describing the high-noise regime, only an upper bound is provided; see line 252. I think there should also be a lower bound for $t_1$. Last question: The results in this paper only show that DDPM is as good as EM and spectral methods. A far more important question is why DDPM empirically outperforms these methods. Can DDPM outperform the existing methods for mixtures of Gaussians in some aspect, such as computational time? Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please see the weaknesses part. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Please see the weaknesses part. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
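As a quick sanity check on the architecture discussed in this review: at the ground truth, $s(x) = \tanh(\mu^\top x)\,\mu - x$ is exactly the score $\nabla_x \log p(x)$ of the mixture $\tfrac12 N(\mu, I) + \tfrac12 N(-\mu, I)$, which can be verified against a finite-difference gradient. The check itself is an editorial addition, not code from the paper.

```python
import numpy as np

def log_p(x, mu):
    # log density of 0.5*N(mu, I) + 0.5*N(-mu, I), up to the Gaussian constant
    a = -0.5 * np.sum((x - mu) ** 2)
    b = -0.5 * np.sum((x + mu) ** 2)
    m = max(a, b)  # log-sum-exp trick for numerical stability
    return m + np.log(0.5 * np.exp(a - m) + 0.5 * np.exp(b - m))

def score_model(x, mu):
    # the two-layer tied-weight architecture from the review
    return np.tanh(mu @ x) * mu - x

rng = np.random.default_rng(3)
d = 4
mu = rng.normal(size=d)
x = rng.normal(size=d)
h = 1e-5
num_score = np.array([
    (log_p(x + h * e, mu) - log_p(x - h * e, mu)) / (2 * h)
    for e in np.eye(d)
])
```

The match follows because the posterior weight of the $+\mu$ component is $\sigma(2\mu^\top x)$, and the difference of the two component weights is $\tanh(\mu^\top x)$.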
Rebuttal 1: Rebuttal: We thank the reviewer for the encouraging words and for finding our paper very interesting. Here we address the questions raised; one of our main points is that while the points raised are indeed great directions for future work, we would like to underscore that prior to our work there were no provable score estimation results in the theoretical literature on distribution learning. We thus encourage the reviewer to reconsider their score and the significance of our contributions in this context. **Decoupling the two layers' weights in the score function** This is an interesting question. It should not be hard to show at least some kind of layerwise training guarantee: if the output-layer weights are fixed at random initialization and the input-layer weights are trained, one should be able to show that the input-layer weights converge to the ground truth because of similar power-iteration/EM-like considerations, after which, if we train the output-layer weights, this is simply a linear problem and we will likewise converge to the ground truth. Extending this beyond layerwise training will require more work. Nevertheless, we find that the novelty of showing that *any* gradient-based algorithm can be used for provable score estimation in a nontrivial high-dimensional distribution learning context perhaps outweighs the strengths of specific architectural choices, unless the architecture is chosen to be much more generic, e.g. a general feedforward $O(1)$-layer network. This would, however, be outside the scope of our work.
**Continuously decaying the noise** The reviewer is correct that our two-stage algorithm is a little different from how DDPMs are trained in practice, but the objective used in practice is actually a little different from what the reviewer describes, and we feel that understanding the former, while of great interest, is also well outside the scope of what is currently possible in the literature on provable score estimation. For context, in diffusion models one trains a score network $s(x_t, t)$ which accepts as input (an embedding of) a noise-level parameter $t$ and an image $x_t$ at noise level $t$, and outputs a denoised image. To sample an image, one then starts with a fully noisy, e.g. Gaussian, seed and runs a discretization of a suitable reverse SDE/ODE that iteratively applies the score network along some noise schedule, as the reviewer suggests. We caution that this should *not* be confused with the following: train the score network by running GD with respect to the DDPM objective at noise level $t_1$, then with respect to the DDPM objective at noise level $t_2$, etc. Instead, one trains $s(x_t, t)$ by running GD on a DDPM objective which is averaged across DDPM objectives at different noise levels; see e.g. Eq. (8) in https://arxiv.org/pdf/2206.00364.pdf. There is flexibility in how to choose the distribution $D$ over noise levels. Perhaps what the reviewer had in mind is that $D$ in practice is a continuous distribution over noise levels, whereas our analysis, as written, pertains to a $D$ which is discrete. We agree that the former is a very interesting and practically relevant direction for study. It would, however, require first defining a suitable architecture which incorporates a time embedding.
This already takes us quite far from the domain of settings where we can hope for an end-to-end analysis of gradient-based methods, and given that nothing was known about provable score estimation for distribution learning prior to our work, we think it is fair to regard this as a topic much further down the line in terms of things that we can currently analyze. **Lower bound for $t_1$** Yes, this is a typo that we will fix in the revision. The noise scale $t_1$ should be $\Theta(\log d)$. The lower bound on the noise scale $t_1$ is used for obtaining the equivalence between the power method and gradient descent on the diffusion model (see Lemma B.3), because the approximation error between gradient descent on the DDPM objective and the power method depends polynomially on $\|\mu_t\|$ and $\|\mu_t^*\|$. Additionally, the parameter at noise scale $t$ is $\mu_t = \mu \exp(-t)$, and when $t$ is $\Omega(\log d)$, $\|\mu_t\|$ is $O(1/\mathrm{poly}(d))$. Similarly, we also obtain that $\|\mu_t^*\|$ is $O(1/\mathrm{poly}(d))$ for $t = \Omega(\log d)$. Therefore, the approximation error between the power method and gradient descent on the DDPM objective becomes a small $O(1/\mathrm{poly}(d))$. **Outperforming other methods:** While we think it is already a compelling contribution even to be able to analyze gradient descent on the DDPM objective, we stress that one of the key takeaways of our work is precisely that our algorithm gets the best of both worlds of the power method and EM. Note that the main limitation of EM is that it does require a warm initialization in order to converge, and the main limitation of the power method is that, while it can converge from random initialization, it does not converge very quickly even when it is in a neighborhood of the ground truth.
In contrast, we have shown that by varying the noise level with which one defines the DDPM objective, we get the advantages of both algorithms: 1) the large-noise regime gives us the global convergence properties of the power method, and 2) the small-noise regime gives us the fast local convergence properties of EM. In fact, we see precisely this phenomenon in our second set of experiments. --- Rebuttal Comment 1.1: Title: Thanks for your reply Comment: Thanks for your wonderful response, which addresses all of my concerns! To encourage your effort in this interesting direction, I will increase my score from 6 to 7.
Summary: This paper presents a new approach to learning Gaussian mixture models using the denoising diffusion probabilistic model (DDPM) objective. The authors provide a two-part algorithm that allows one to reconstruct the parameters of a mixture of Gaussians. The first part of the algorithm uses gradient descent with a "high noise" scale $t$ and random initialization to learn parameters correlated with the true ones, while the second part refines these parameters in the small-$t$ regime, using those obtained in the previous step as a "warm start". Strengths: The paper gives the relation between gradient descent with the loss from the denoising diffusion probabilistic model and a) power iteration in the case of large $t$ and b) the Expectation-Maximization algorithm in the case of small $t$. A number of theorems and lemmas are proved. Weaknesses: 1. The presentation of the results is weak. The article is poorly structured, with no "conclusions" or "discussion" section. There is a lot of fuzzy language. Such wording may be acceptable in informal descriptions of results, but phrases like "with high probability" are used even in the section with formal statements. The main text of the article in the main pdf file differs from the version in the file with the Appendix. 2. The matrix $(1-\eta r)I + 2\eta \mu^*_t {\mu^*_t}^T$ has eigenvalues close to one. The largest of them (whose eigenvector is sought in the power iteration (6)) differs only by an additional term proportional to $e^{-2t}\eta$. This term is very small, since by definition $t$ is large in this case, and the learning rate $\eta$ is usually chosen to be very small ($\sim 10^{-2}$--$10^{-4}$). In such a situation, power iteration can converge to an arbitrary vector. Moreover, as expression (6) is an approximation, this convergence is even more uncertain. 3. There are no numerical experiments, even for the simplest cases. However, the real convergence of power iterations with the matrix (7) is not obvious. 4.
There is no separate comparison with other approaches to recovering Gaussian mixture parameters, including state-of-the-art ones, although the paper has a rather large literature-review section. Minor: line 156 typo: "poweri iteration". The abbreviation "EM" is spelled out only on page 6 but is used widely starting from the abstract. Technical Quality: 2 fair Clarity: 1 poor Questions for Authors: 1. Have you performed numerical experiments to check the convergence of the method and compare it with other approaches? 2. In reality, for some typical numerical parameter values, how many iterations are needed for the power iteration described in (6)--(7) to converge? 3. line 246: (Sec. 2.1 with formal statements) What is "high probability"? Minor: What do the authors mean by "well-separated" (line 131)? What do the authors mean by a _student_ network (line 142)? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 1 poor Contribution: 2 fair Limitations: 1. Only rough asymptotic estimates are given. The actual constants that would arise in practical experiments are not given, and they can be large. 2. The proof for the case of large $t$ (the relation of DDPM to power iteration) is based on an approximate, not exact, formula (6), and thus it is not conclusive. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the feedback and we address all the points raised by the reviewer. As the main weakness mentioned by the reviewer on the convergence of power method seems to be due to some misunderstanding and is thoroughly addressed both through theory and experiments that we performed for the rebuttal, we ask the reviewer to reconsider their score. - **Fuzzy language with informal phrases like "with high probability":** It is completely standard in the literature of learning theory to use “with high probability” to denote “with failure probability inverse polynomial in d” (for example, see Daskalakis et al. 2017, Diakonikolas et al. 2020, Liu-Li ‘22). The reviewer has not provided any other instance of “fuzzy” language, and we disagree that aspects of the paper are “fuzzy”. We will also make sure to include a conclusions section, but apart from this, we respectfully disagree that the presentation of the results is “weak” or “poorly structured.” - **power iteration can converge to an arbitrary vector** We strongly disagree that the convergence is uncertain. If a matrix’s top eigenvalue is larger than the others by some multiplicative $1+\tau$, then in $O(1/\tau)$ power iterations we will converge to the top eigenvector (see e.g. https://en.wikipedia.org/wiki/Power_iteration). So even if $\tau$ is small, as long as $\tau$ is inverse polynomial in the relevant parameters (which is the case for our $\tau = e^{-2t}\eta$, as $t$ is only logarithmic and $\eta$ is inverse polynomial), the complexity of power iteration is still polynomial. Furthermore, in Lemma 9 (page 7) and in Lemma B.3, we make completely precise the approximation error in (6) and rigorously show that this error is sufficiently low order that it does not affect convergence. 
We, therefore, encourage the reviewer to seriously reconsider their view that our proof is “inconclusive.” - **Numerical experiments and numerical convergence of power iterations:** - We have performed a set of numerical experiments to validate our theory, as part of the rebuttal. Specifically, it shows that the constant factors in our analysis are benign so that our convergence guarantees are practical, and it demonstrates the utility of training using the DDPM objective in both high- and low-noise regimes. - That being said, we would like to make a broader point. Please note that this is a theory paper. The main contributions are a set of mathematical theorems. Experiments are orthogonal to the thrust of our work, and indeed orthogonal to the thrust of recent *theoretical* developments for diffusion models. The field of diffusion generative modeling is inundated with experimental works, and it is well-known that the algorithms used in this field are empirically successful. In the context of our work, there is not much to validate experimentally that would add to the empirical literature. In contrast, our theoretical understanding of diffusion models is seriously lacking, apart from a handful of works written over the last year which we have cited in Section 1.1. The whole point of our work is to give a rigorous theoretical justification for why these existing algorithms are effective. - **Comparison of results with other state-of-the-art approaches:** We make a comparison to the most relevant state-of-the-art approaches in the introduction section in Lines 52-55 and in the Related Work section in Lines 104-105. We also state the state-of-the-art EM guarantee of Segol and Nadler which clearly matches our guarantee, though we will add some words in the revision making this clear. In the last paragraph of Related Work, we also mention works on “general mixing weights and covariances” which is clearly more general than our setting. 
Again, we will add some words in the revision to make this clear. - **Asymptotic analysis and constant factors:** Please recall this is a theory paper where it is standard to use asymptotic analysis and suppress constant factors. Even the prior theoretical works on the diffusion models use asymptotic analysis (e.g., Lee et al. 2023, Chen et al. 2023a, b). And as the experiments we performed for the rebuttal make clear, the constant factors are benign. **Minor comments:** - By well-separated we mean that the means are separated by some absolute constant. We also handle the case of inverse polynomial separation. - Student network refers to the network that we train. This “student-teacher” terminology is standard in the deep learning theory literature, where the “teacher” is the ground truth network generating the examples (in our case, it is the true score function), and the “student” is the network whose parameters are optimized via gradient descent to minimize some training objective. --- Rebuttal Comment 1.1: Comment: I thank the authors for their answers and for additional experiments. **1. About the style of the paper.** The article is missing sections such as "Background", "Problem Statement", "Conclusion", "Discussions", comparison with other algorithms, etc. Of course, they don't all have to be at the same time. But the presented structure of the article is extremely difficult to understand the main ideas (which are quite simple). To correct the structure, to introduce correct definitions, to correct holes in the proofs, etc., requires substantial revisions that go far beyond the minor revisions allowed at this stage of reviewing the paper. **2 About the convergence of power iteration:** > (see e.g. https://en.wikipedia.org/wiki/Power_iteration). a. 
The Wikipedia article you quoted says > If we assume $A$ has an eigenvalue that is strictly greater in magnitude than its other eigenvalues and the starting vector $b_0$ has a nonzero component in the direction of an eigenvector associated with the dominant eigenvalue, then a subsequence $(b_k)$ converges to an eigenvector associated with the dominant eigenvalue. Without the two assumptions above, the sequence $(b_k)$ does not necessarily converge. Your paper is positioned as a theoretical paper, but you have not checked these two conditions, which are necessary to prove the convergence of power iteration. The check of the second condition (about the non-zero component) can be omitted, since it is always possible to take several random vectors as an initial approximation. But the condition that the first eigenvalue must be much larger than the others is crucial. The argument about asymptotics here cannot replace a rigorous study, since the numerical coefficients in these asymptotics are not known in advance, and it may turn out that the first eigenvalue is not sufficiently different from the others for convergence of power iteration. > If a matrix’s top eigenvalue is larger than the others by some multiplicative factor $1 + \tau$, then in $O(1/\tau)$ power iterations we will converge to the top eigenvector b. Moreover, you don't just have $1 + \tau$. You have $1 + \tau + \epsilon$, where $\epsilon$ appears due to the approximate equalities (6)--(7). Is it possible that $\epsilon < 0$, and furthermore that $1 + \tau + \epsilon < 1$? Theoretically, it is possible, and in this situation power iteration will converge to an arbitrary vector. The authors do not prove that this situation is excluded, as there is no explicit comparison of these two values in the __main text__ of the paper. (Lemma 9, which turned into Lemma 8 in the supplementary file, contains only the expression $poly(\frac1d)$ in the right-hand side, which can be arbitrarily large when $d$ is fixed.) 
**Minor:** >Please note that this is a theory paper. The main contributions are a set of mathematical theorems. Experiments are orthogonal to the thrust of our work, and indeed orthogonal to the thrust of recent theoretical developments for diffusion models. Although the paper is positioned as theoretical, it describes a practical algorithm. If it turns out that in practice the number of iterations of such an algorithm is equal to $10^9$ at real parameter values, even though it is "asymptotically small", the practical value of such an algorithm is insignificant. >By well-separated we mean that the means are separated by some absolute constant. It's still not clear how "well-separated" differs from just "separated" (or "poorly separated", etc.). > Student network refers to the network that we train. This “student-teacher” terminology is standard in the deep learning theory literature The concept of "student network" is connected with the concept of "teacher network". You answered that you do not have a teacher network as a network. Moreover, the words "teacher network" do not even appear in the paper. Thus, the concept of "student network" is misleading. **Conclusion.** I think that each of my two arguments above (about the structure of the paper and about the lack of justification for the convergence of power iteration) is separately a reason to reject the paper. --- Reply to Comment 1.1.1: Title: Response to Reviewer's Comment on the Rebuttal Comment: > you have not checked these two conditions given The first condition directly follows from the eigenvalues of the matrix $I_d (1 - 3 \eta || \mu_t ||^2 ) + 2 \eta \mu_t^* \mu_t^{\ast T}$ and is written in Lemma B.5. The first eigenvalue (corresponding to $\mu_t^*$) is $(1 - 3 \eta || \mu_t ||^2 ) + 2 \eta || \mu_t^* ||^2$, and the rest of the eigenvalues are $(1 - 3 \eta || \mu_t ||^2 )$. 
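For illustration, this eigenvalue claim can be checked numerically. The following sketch uses arbitrary values for $d$, $\eta$, $\mu_t$, and $\mu_t^*$ (it is not the paper's code); the statement holds because the matrix is a multiple of the identity plus a rank-one term along $\mu_t^*$:

```python
import numpy as np

d, eta = 6, 0.01                                     # arbitrary illustrative values
mu_t = np.array([0.5, -1.0, 0.2, 0.0, 1.5, -0.3])    # stands in for mu_t
mu_star = np.array([1.0, 2.0, 0.5, -1.0, 0.0, 0.3])  # stands in for mu_t^*

# M = I_d * (1 - 3*eta*||mu_t||^2) + 2*eta * mu_star mu_star^T
M = np.eye(d) * (1 - 3 * eta * mu_t @ mu_t) + 2 * eta * np.outer(mu_star, mu_star)

eigs = np.sort(np.linalg.eigvalsh(M))
base = 1 - 3 * eta * mu_t @ mu_t
top = base + 2 * eta * mu_star @ mu_star

# Top eigenvalue is base + 2*eta*||mu_star||^2; the remaining d-1 all equal base.
assert np.isclose(eigs[-1], top) and np.allclose(eigs[:-1], base)
```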
The second condition (written in Lemma B.4) is a direct consequence of the anti-concentration of the Gaussian random variable and does not require taking several random vectors as an initial approximation, as suggested by the reviewer. > Is it possible that $\epsilon < 0$, and furthermore that $1 + \tau + \epsilon < 1$? Theoretically, it is possible, and in this situation the convergence of power iteration will be to an arbitrary vector. No, $\epsilon$ is the approximation error when comparing gradient descent on the DDPM objective to the power method and it cannot be negative. We have also proved in Lemma B.5 and Lemma B.6 that the approximation error is sufficiently small such that it does not disturb the convergence of the power method. It is simply incorrect to say we do not carefully keep track of these issues. > contains only the expression $poly(\frac{1}{d})$ in the right-hand side, which can be arbitrarily large when $d$ is fixed. The parameter $d$ is never fixed. For decades, $d$ has been the key asymptotic parameter of interest when learning mixtures of Gaussians. For example, if the failure probability of an event is $O(\frac{1}{d})$, then the failure probability of that event is treated as small (see the cited literature: Daskalakis et al. 2017, Diakonikolas et al. 2020, Kwon et al. 2020, Liu-Li ‘22). The reason is that in the regime where $d$ is a fixed constant, all of the learning problems considered are trivial: in this case, because $d = O(1)$, brute-force enumeration over an epsilon-net suffices. > It's still not clear how ‘well-separated’ differs from just ‘separated’. We consider two regimes of separation in the paper for mixtures of two Gaussians: 1) the centers are separated by an absolute constant, or 2) the centers are separated by $poly(1/d)$. By well-separated centers, we mean centers separated by an absolute constant, and by poorly-separated centers, we mean the other case. 
To summarize, we strongly disagree with the reviewer’s repeated claims that there are “holes in the proof”, “the proof is based on an approximate formula” and “is not conclusive”. We have answered all the questions raised by the reviewers and are happy to discuss in further detail.
Summary: Proofs are given for showing that the true mean parameters of Gaussian mixture models (GMMs) with identity covariance matrices can be recovered when using gradient descent to optimize the DDPM objective. It is argued that it is not well understood if score-based models can provably estimate the parameters of the data distributions, which is the objective of this submission. The findings and results are connected to the EM algorithm. Strengths: The paper attempts to contribute fundamental understanding of diffusion models, focusing on an important class of data distributions, GMMs. Diffusion models are clearly of high interest to the NeurIPS community, and theoretical results of the type provided in the submission could be significant, or of interest to researchers in the field. The literature review is thorough, and it appears that the topic of the paper is original, in the sense that the theoretical results do not exist in the literature. It is good that the results are connected to other well-studied algorithms, like the EM algorithm. The clarity of the purpose of the submission benefits from the bullet list in the abstract. Weaknesses: The structure of the submission is unfortunately not very strong, which makes the arguments in the submission tedious to follow. I am missing sufficient background on the power method and the EM algorithm; a background on the former is not provided, and there is no conclusion section or discussion. Such sections would help the reader to understand the contributions of the submission in terms of the lingo and terminology introduced in the paper. Also, it would give the paper a more concise ending, as it now ends with a proof sketch. My score of the paper is mainly based on the structure and quality (presentation) of the paper, which I believe is below acceptable for a NeurIPS paper. However, my confidence level is relatively low as theoretical results for diffusion/score-based models is slightly outside my area of expertise. 
That is, I am not sure how significant the results in the submission are--hopefully other reviewers can aid to evaluate this part. Nonetheless, as it stands, my opinion is that the inadequate presentation of the work outweighs (my understanding of) the significance of the results in the presented work. To help the authors in the revision of the paper, I below enumerate some suggestions on how to improve the structure and some of my concerns. I hope they are helpful. * GD is frequently used to denote gradient descent. But it is not defined as an acronym in the abstract nor in the paper. I inferred the definition of GD only after reading quite a bit of the paper. This confused me, as I did not know what GD referred to. I suggest introducing the acronym in the abstract and writing gradient descent (GD) in line 35 where GD is first used. * In line 35, what is the meaning of a "natural" data distribution? I could not find an explanation of it and could not derive what it ought to specify. Is "natural" an important distinction? * Sentences like "we use this as our student network architecture when running gradient descent." (line 146) give me the expectation that some experiments have been carried out where these architectures have been used. As there are no experiments in the paper, this confused me. A potential reformulation could be "one could use" as you have in fact not used it. Similar confusion arose at line 162, "we run projected gradient descent". * Related to the previous bullet, why not run experiments to validate the results on synthetic data? * Is the heading in line 164 supposed to be a paragraph or a section? Now it is an empty paragraph. * I am not familiar with the power method, and no background or explanation of it is given in the paper. I think this needs to be included to make the paper more self-contained. * Projected gradient descent is introduced after the background section without proper explanation. 
Is projected gradient descent typically used to train diffusion models? * The EM algorithm is central to the findings in the paper. As such, I think it should be more carefully explained. Especially, including a Background section earlier than Sec 1.3 would strengthen the paper and make it easier to follow the arguments made in the earlier sections. For instance, "We show that the gradient descent in this "small noise" regime corresponds to the EM algorithm" in line 158 would become easier to understand. * In the setting where $K=2$, is there an additional, consistent assumption that the means of the Gaussians are centered around the origin? I.e. the means are $-\mu^*$ and $\mu^*$? I read the section above Eq. (2) as this being an example of a GMM "for the sake of intuition". But then this parameter setting is specified again in Eq. (13). Does this mean that the results in the paper are constrained to Gaussians centered around the origin? If so, this should be specified in the relevant theorems, i.e. Thm 1 and 2 and their formal counterparts. **Minor issues** * line 46: the flow of the sentence could be improved by using punctuation after "$d$, the dimension." * The acronym EM is most often used without definition. Although it is more clear what this refers to, "expectation-maximization" is spelled out in other parts of the paper. I suggest introducing the definition and sticking to it throughout the paper. Technical Quality: 3 good Clarity: 1 poor Questions for Authors: * In Sec. 1.2, it would be nice to see a discussion or explanation of why the covariance matrices are $I_d$ and not $\sigma^2 I_d$. That is, the variance elements would not necessarily equal one, but the Gaussians would still be spherical. Would your results still hold for this setting? Otherwise, shouldn't the setting itemized in the abstract specify that your results apply to unit-spherical Gaussians? 
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 1 poor Contribution: 3 good Limitations: I had some questions on the assumptions of the parameters of the Gaussians in the GMMs. Otherwise I believe the limitations were addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their helpful suggestions. We will make sure to include a discussion/conclusion section in the revision. Here we address their other points one-by-one. Given that it is straightforward to fix all of these concerns in a single round of cosmetic edits, and given that the only non-expository weakness mentioned stems from a misunderstanding about the generality of the origin-centering assumption, we don’t think that such a low score is appropriate and respectfully urge the reviewer to reconsider. **Weaknesses:** - *Confusion on GD:* In the updated draft, we will clarify that GD stands for gradient descent. - *The meaning of a "natural" data distribution:* The question in line 35 is intended to be informal, and in ML theory, “natural” is a commonly used term to signify a generative assumption which is “simple” and “non-contrived.” Given that Gaussian mixtures are one of the most studied distributions in the literature for provable algorithms for distribution learning, “natural” is a reasonable term in this context. - *Expectations about the experiments on the student network:* We are happy to incorporate the reviewer’s suggestions to avoid this confusion, but note that in theoretical works, it is standard to say “we perform algorithm X to solve task Y” without the literal implication that experiments will be done. For instance, if a paper says “we will maintain the following data structure while processing the edges of graph G,” it need not mean that they will carry out an experiment in which they implement said data structure. - *Experiments*: see below for a summary of the experiments we performed for the rebuttal, as well as a separate discussion on the role of experiments in theoretical works - *Is the heading in line 164 a paragraph or a section?:* It is a paragraph as intended. We will reformat the bolded subheaders beneath it as underlined or italicized subheaders to minimize confusion. 
- *Background on power method:* The power method is taught in introductory algorithms and linear algebra classes. We will add a paragraph reminding the reader of the simple intuition (to compute the top eigenvector of a matrix, repeatedly multiply a vector by the matrix, and the result will converge in angular distance to the top eigenvector). - *Background on projected gradient descent:* It is very common in deep learning to optimize with an L1/L2 regularizer on parameters, which implicitly plays the role, in a sense that can be made formal, of a projection step. Given that regularized gradient descent is thus a standard algorithm not just in diffusion modeling but in machine learning more broadly, and furthermore given that we explicitly state in Line 162 the set that we are projecting to, it is unclear what other aspects of projected gradient descent should be explained at that point in the text. - *Additional background on EM algorithm:* EM for Gaussian mixtures is taught in introductory machine learning at the undergraduate level. We will add some additional details to the paragraph starting in Line 58. Namely, we will include a centered equation reminding the reader of the form of the “M” step in the EM algorithm, and then elaborate on what we mean when we say that the gradient updates when minimizing the DDPM objective are performing this M step. - *Origin-centered mixtures of Gaussian:* We refer the reviewer to Eq. (13) where we make this assumption clear (When we said “For the sake of intuition” in Section 1.2, we meant that we focus on *K = 2* for the sake of intuition, not that we focus on origin-centered means). We stress that the origin-centered mean assumption is not a “constraint”, it is actually without loss of generality as mentioned in Line 192, 193 (just after Eq.(13)). 
One can always estimate the overall mean of the data distribution to high accuracy from samples and then recenter the dataset accordingly; one can check that the steps of our proof are robust to the error in estimating this mean. **Minor issues:** - We will use the suggested punctuation - We will make clear early on that EM stands for expectation-maximization **Questions:** - Yes, our results still hold for this setting. The reason is that if you scale the dataset by a factor of 1/sigma, you reduce it to the unit-spherical case. It is however true that if the covariance matrices have different variances across the different mixture components, then our analysis does not yet apply. Note however this should not be viewed as a weakness per se: it is common for works to focus on the equal covariance setting (see e.g. the state-of-the-art works of [Segol-Nadler ‘21] and [Liu-Li ‘22]). We will make this clear in the revision and reword the abstract accordingly. **Experiments:** - We have performed a set of numerical experiments to validate our theory, as part of the rebuttal. Specifically, it shows that the constant factors in our analysis are benign so that our convergence guarantees are practical, and it demonstrates the utility of training using the DDPM objective in both high- and low-noise regimes. - That being said, we would like to make a broader point. Please note that this is a theory paper. The main contributions are a set of mathematical theorems. Experiments are orthogonal to the thrust of our work, and indeed orthogonal to the thrust of recent *theoretical* developments for diffusion models. The field of diffusion generative modeling is inundated with experimental works, and it is well-known that the algorithms used in this field are empirically successful. In the context of our work, there is not much to validate experimentally that would add to the empirical literature. 
In contrast, our theoretical understanding of diffusion models is seriously lacking, apart from a handful of works written over the last year which we have cited in Section 1.1. The whole point of our work is to give a rigorous theoretical justification for why these existing algorithms are effective. --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: First, I want to thank the authors for their rebuttal; they have clearly worked hard to address my comments. I also acknowledge that I have read the submitted pdf, thanks for the complementary experiments. However, I disagree that the presentation edits are "straightforward", and adding one or two sections cannot be regarded as a "single round of cosmetic edits". To use the terminology of the NeurIPS reviewer guidelines (https://neurips.cc/Conferences/2023/ReviewerGuidelines#:~:text=well%2Dwritten%20summary.-,Strengths%20and%20Weaknesses,-%3A%20Please%20provide) *Quality: Is this a complete piece of work or work in progress?* I still think this is a work in progress, a view I appear to share with the other reviewers. *Clarity: Is the submission clearly written? Is it well organized? Does it adequately inform the reader?* I am sorry to conclude that these criteria are not met. Even if the promised revisions will certainly improve the clarity of the paper, I cannot increase my score to an acceptance level before reading the revised version of the paper with the added section(s). Instead, I will increase my score to 4, displaying my appreciation of the efforts put in during the rebuttal, while keeping my low confidence score. I hope the authors will understand my reasoning. I want to be clear that I appreciate the improvements of the submission that have been made during the rebuttal period, and I applaud the authors for their efforts. By incorporating the feedback provided by the reviewers, a resubmission of the work will be competitive for acceptance in a future conference proceeding.
Rebuttal 1: Rebuttal: We thank the reviewers for their comments, which we have responded to individually. Here we note that we also performed numerical experiments (see attached pdf) to demonstrate that 1) the **constant factors in our analysis are quite benign**, and 2) as predicted by our theory, **training with the DDPM objective at different noise scales has a tangible benefit over power iteration and EM**, namely it gets the best of both worlds of power iteration and EM: global convergence from random initialization for the former and fast local convergence for the latter. *In our individual responses to reviews, we also included some broader comments on the role of experiments in theory, especially in the theory of generative modeling where there is a wealth of empirical works and dearth of theoretical understanding.* Pdf: /pdf/57e8c0e645fd7e6faff1d2459d9521b23bdb8838.pdf
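As a point of reference for the EM comparison above: for the symmetric two-component mixture $\tfrac12\mathcal{N}(\mu^*, I_d) + \tfrac12\mathcal{N}(-\mu^*, I_d)$, the EM update reduces to $\mu \leftarrow \frac{1}{n}\sum_i \tanh(\langle \mu, x_i \rangle)\, x_i$. A minimal sketch with arbitrary illustrative parameters (not the authors' experiments):

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 10, 20000                            # illustrative dimension and sample size
mu_star = np.zeros(d); mu_star[0] = 2.0     # ground-truth mean (arbitrary)

# Samples from 0.5*N(mu*, I_d) + 0.5*N(-mu*, I_d).
signs = rng.choice([-1.0, 1.0], size=n)
X = signs[:, None] * mu_star + rng.standard_normal((n, d))

mu = 0.1 * rng.standard_normal(d)           # random initialization
for _ in range(50):
    # E-step responsibilities reduce to tanh(<mu, x>); the M-step is a weighted mean.
    mu = (np.tanh(X @ mu)[:, None] * X).mean(axis=0)

err = min(np.linalg.norm(mu - mu_star), np.linalg.norm(mu + mu_star))
print(err)  # small: EM recovers mu* up to the sign symmetry of the mixture
```

The sign ambiguity is inherent: the mixture is identical under $\mu^* \mapsto -\mu^*$, so recovery up to sign is the best any method can do.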
NeurIPS_2023_submissions_huggingface
2023
Cross-Episodic Curriculum for Transformer Agents
Accept (poster)
Summary: This paper presents Cross-Episodic Curriculum (CEC) to boost the learning efficiency and generalization of Transformer agents. Specifically, CEC places cross-episodic experiences into a Transformer’s context, which forms the basis of a curriculum. The authors also provide three concrete curriculum implementations: a learning-progress-based curriculum, a task-difficulty-based curriculum, and an expertise-based curriculum. Experiments on discrete control, such as in DeepMind Lab, and continuous control, as seen in RoboMimic, show the superior performance and strong generalization of CEC. Strengths: 1. Simple but effective idea for Cross-Episodic Curriculum. 2. Three concrete curriculum implementations for different settings (RL or IL). 3. Good paper writing. It is easy to follow the methods and experiments. Weaknesses: 1. My major concern is the baselines used in the experiments (and also the related works). Specifically, the authors only compare against the Transformer-based BC agent, which is a weak baseline. The reviewer would encourage the authors to compare with more recent methods like Algorithm Distillation [1] and TIT [2], or discuss the reason why these methods are not suitable for comparison. (The reviewer guesses that BCQ and CQL are based on MLP?) 2. Some important details are missing in the experiments. For example, in the ablation study, the reviewer (and most readers, I guess) would like to know how to implement the "without cross-episodic attention" variant and "Curriculum granularity", but the authors put this important information in the Appendix. [1] In-context Reinforcement Learning with Algorithm Distillation. ICLR 2023. [2] Transformer in Transformer as Backbone for Deep Reinforcement Learning. 2022. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Could the authors compare with more recent methods like Algorithm Distillation [1] and TIT [2], or discuss the reason why these methods are not suitable for comparison? 2. 
The key insight of CEC is that sequential cross-episodic data manifest useful learning signals that do not easily appear in any separated training episodes. Besides the better performance, could the authors provide more evidence for this insight? 3. Why is the learning-progress-based curriculum worse than the task-difficulty-based curriculum in most evaluation environments? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: It would be better to discuss which curriculum is suitable for which environment setting. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your constructive feedback. We address your concerns in detail below and will update our paper accordingly. > Compare with more recent methods We compare against two more relevant baselines. Please refer to [global response](https://tinyurl.com/4bc2469z) for comparison and discussions. > Differences from AD First, we would like to highlight that we have a different focus and scope. AD [1] focuses on learning a meta RL agent that demonstrates in-context improvement during test time without gradient updates. Instead, we focus on how to improve the data efficiency of Transformer-based agents by explicitly formulating data in curricular sequences. To achieve this goal, we propose and validate three solutions in both RL and IL settings, which are underexplored in AD. Second, we admit that our learning-progress-based curriculum bears a resemblance to AD at first glance, but they still differ in several respects, as detailed below. - On data generation. AD requires **N** different single-task source agents to generate data, which amounts to N different copies of model weights. This is evident in the second requirement and line 1 of Algorithm 1 in the AD paper. However, our learning-progress-based curriculum makes minimal assumptions about the diversity of source agents. In fact, we only require a single multi-task agent to generate training data. This makes our method more applicable since it requires less storage and time. - On evaluation. AD is supposed to only work with a single task instance at test time. This is evident in line 12 of Algorithm 1 in the AD paper. In other words, every time the environment is reset, the agent will be spawned at exactly the same location. The goal location and other environmental transitions also remain the same for all evaluation episodes. 
However, our agent is evaluated with new and different task instances every episode, meaning that the agent's initial location, goal location, and other factors change every test episode. Therefore, our test setting is **strictly harder** than AD’s. Due to the above two differences, we do not think our work is directly comparable to AD. We will still conceptually compare our learning-progress-based curriculum against AD and clarify in the final version. > Differences between TIT As pointed out in their paper, TIT [2] mainly focuses on designing better deep networks for RL. Albeit important, this is still orthogonal to our focus and not directly comparable. Indeed, TIT is complementary to our CEC method. It will be a promising avenue to combine CEC with TIT in future work. We will cite and discuss TIT, and conceptually compare in the final version. > Experiment details For the variant w/o cross-episodic attention, the attention span is restricted to the ongoing episode. This is achieved by only feeding single-episode data during training time, and clearing up KV caches every environment reset at test time. Regarding curriculum granularity, we approximate it with the number of different difficulty levels and their intervals. For example, a curriculum with fewer difficulty levels is considered to be more coarse. One consisting of levels with larger intervals is considered to be more coarse. We will clarify in the final version. > More evidence for this insight [behind the proposed method]? To qualitatively investigate the policy difference, we visualize the test-time behavior. Videos are provided [here](https://tinyurl.com/yypwytt). We find that by keeping a task-difficulty-based curriculum in context, the agent is able to learn critical skills such as visual navigation and long-horizon planning from relatively easier tasks, then apply them to the most challenging setting. This is demonstrated by the fact that the agent can navigate to gradually distant goals. 
Furthermore, a concurrent work [3] provides theoretical support for cross-episodic learning. In multi-armed bandit problems, by conditioning on a distribution over linear bandits (cross-episodic context), the policy can exploit the unknown structure, allowing more informed exploration and decision-making. They show that in-context pretraining over multiple episodes effectively performs posterior sampling — a provably sample-efficient Bayesian RL algorithm that learns faster than the source algorithms used to generate the pretraining data. > Why is the learning-progress-based curriculum worse than the task-difficulty-based curriculum? Resulting agents’ performance is largely subject to source agents’ quality. This is supported by videos provided [here](https://tinyurl.com/yypwytt), where we can see agents from the learning-progress-based curriculum sometimes make progress toward the goals. This is due to the fact that the source agents, which are used to generate training data, also struggle in this task. Nevertheless, agents from the learning-progress-based curriculum still perform better than BC agents trained on expert data, suggesting that useful learning signals emerge from cross-episodic experience. > Which curriculum is suitable for which environment We introduced three curricula: Learning-progress-based curriculum (L), Task-difficulty-based curriculum (T), and Expertise-based curriculum (E). In RL settings, we recommend starting with L since it is the most straightforward. However, if the task is exceptionally challenging while easier versions of the task exist, we recommend T. In IL settings, if the data has non-uniform quality (e.g., the demonstrator improves over time), we recommend E. **References** - [1] Laskin et al., In-context Reinforcement Learning with Algorithm Distillation, ICLR 2023. - [2] Mao et al., Transformer in Transformer as Backbone for Deep Reinforcement Learning, 2022. 
- [3] Lee et al., Supervised Pretraining Can Learn In-Context Reinforcement Learning, 2023. --- Rebuttal Comment 1.1: Title: Thanks for the feedback Comment: I appreciate the authors' feedback. It addresses most of my concerns. Although I like the ideas proposed in this paper, it is more important to figure out which scenarios these methods/curricula are applicable to. Currently, the authors only give a general description of this question. So I would keep my score. --- Reply to Comment 1.1.1: Comment: Dear reviewer, We are glad to know that our reply addressed your concerns. Due to the length limit, we discussed at a high level the last question about which curriculum is suitable for which task setting. In this reply, we will provide more detailed explanations, and welcome your suggestions and feedback on new experiments that can address your remaining concern. In this work, we introduced three curricula: Learning-progress-based curriculum (L), Task-difficulty-based curriculum (T), and Expertise-based curriculum (E). L and T are particularly suitable for RL, while E is suitable for IL. In RL, as we did in the DMLab tasks Goal Maze and Watermaze, we recommend starting with L. Agents resulting from this curriculum learn better visuomotor policies that outperform the DT baseline as well as the BC baseline trained on expert data. However, if the task itself is so challenging that source algorithms barely make progress, we recommend T. This is justified by our experiment on the DMLab task Irreversible Path. Concretely, the RL agent directly trained on the test difficulty completely failed, which resulted in the unsatisfactory performance of the agent trained with the L curriculum. However, the agent resulting from the T curriculum learns critical skills from easier settings and applies them to the hardest setting, and hence significantly outperforms the L-curriculum variant, both DT and AT baselines, as well as RL oracles. 
This is further qualitatively investigated and justified by the rollout visualizations provided [here](https://tinyurl.com/yypwytt). The feasibility of obtaining each curriculum is extensively discussed in the [global response](https://tinyurl.com/4bc2469z).

Furthermore, we would like to highlight that, similar to many established works that develop novel algorithms for learning visuomotor policies, our discussion of suitability is neither so narrow as to lose generality, nor so vague as to be impractical. For example, the authors of [1] also recommend their new method at a level of abstraction similar to ours, as quoted below.

> Recommendations. In general, we recommend starting with the CNN-based diffusion policy implementation as the first attempt at a new task. If performance is low due to task complexity or high-rate action changes, then the Time-series Diffusion Transformer formulation can be used to potentially improve performance at the cost of additional tuning.

Nevertheless, we agree with the reviewer that it is important to figure out which scenarios these curricula are applicable to. Since our investigation of the RL setting is already comprehensive, **we would like to further investigate the IL setting by also applying the learning-progress-based curriculum to RoboMimic tasks**, since RoboMimic also provides trajectory data generated by RL algorithms with labeled rewards. We will update with new results once they are ready. Meanwhile, we welcome the reviewer's suggestions and feedback on further experiments that could fully address the remaining concern.

**References**
- [1] Chi et al., Diffusion Policy: Visuomotor Policy Learning via Action Diffusion, RSS 2023.
Summary: This work proposes a new method, CEC, to boost the learning efficiency and generalization capability of agents by structuring multiple episodes so as to exploit the Transformer's pattern- and sequence-recognition capabilities. The proposed method shows improved performance compared to the baselines and demonstrates generalization capability.

Strengths:
1. The authors propose a new approach for Transformer-based agents that leverages information from various types of trajectories by exploiting the representation capability of the Transformer architecture.
2. The proposed method quantitatively demonstrates knowledge distillation capability through several evaluation and generalization experiments.

Weaknesses:
Technical Quality: 2 fair
Clarity: 2 fair

Questions for Authors:
1. If we have to manually set pairs of trajectories and their corresponding curriculum levels, it means we know the quality, progress, or expertise of the obtained trajectories in the online RL process (e.g., Table 1, Eq. (1)). I think this is a quite strong assumption or extra burden, since the RL framework itself should enable the agent to automatically learn how to solve the problem without external intervention. It is not clear whether a given prior curriculum sequence is a fair assumption.
2. In line 107, the meaning of the notations “n” and “t” is unclear. If “n” is the number of entire episodes of various types and “t” is the timestep, then does the transformer not refer to the full horizon of the episode when computing the log-likelihood of the policy?
3. Also, I was wondering how the transformer distinguishes each trajectory's level or expertise. I think the transformer cannot distinguish it without a learnable token or expertise embedding that works similarly to the position embedding in NLP. If we assume the level is given for embedding, it would be burdensome to collect the accurate level for each trajectory in practice.
If the transformer is implemented without these considerations, how can we ensure that the transformer leverages the information from various levels of trajectories? I think it would be proper to compare with transformer-based offline RL models such as Decision Transformer [1] or other recently proposed SOTA models to check whether the proposed method just memorizes every sequence pattern in the trajectories or infers something helpful from the curricular data.

[1] Chen, Lili, et al. "Decision transformer: Reinforcement learning via sequence modeling." *Advances in Neural Information Processing Systems* 34 (2021): 15084-15097.

Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: The authors do not discuss the potential negative societal impact of their work and do not explicitly discuss the limitations of their method in the manuscript. I do not anticipate ethical issues.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes
Rebuttal 1:
Rebuttal: Thank you for your constructive feedback. We address your concerns in detail below and will update our paper accordingly.

> [...] It is not clear whether a given prior curriculum sequence is a fair assumption.

We agree with you that accurately formulating a curriculum is challenging. In this work, we introduce and validate three curriculum designs. We discuss the practicality of obtaining each curriculum one by one and will move the discussion to the main text in the final version. Due to the limited reply length, please refer to the [global response](https://tinyurl.com/4bc2469z) for detailed discussions.

Furthermore, to ensure a fair comparison, we compare against a new baseline that also assumes a prior ordered sequence: Agentic Transformer (AT [1]), a **concurrent** work whose preprint was released after the NeurIPS submission deadline and whose conference version was published at ICML in July. It is closely related to our work in that it trains Transformers on sequences of trajectories **sorted in ascending order of their rewards**. Similar to our assumption, this sorting also requires ordered trajectories. In our comparison, we find that our method outperforms AT on difficult tasks, while matching its performance on relatively easy tasks and when probed with new environmental changes. Please refer to the [global response](https://tinyurl.com/4bc2469z) for detailed results and discussions.

> Unclear definition of “n” and “t”

Thanks for pointing out the confusion. Here “n” denotes the number of episodes, and “t” is the timestep within the current episode. The policy is conditioned on two parts: the historical observation sequences from previous episodes, and the observations received in the ongoing episode up to the current timestep. We will make the notation clearer in the final version.
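To make this conditioning concrete, here is a minimal sketch of how a cross-episodic context could be assembled; all names and the data layout are our own illustrative assumptions, not the paper's actual implementation.

```python
# Sketch: the context is n - 1 full previous episodes (the curriculum)
# followed by the ongoing episode truncated at the current timestep t.
def build_context(previous_episodes, ongoing_episode, t):
    """previous_episodes: list of completed episodes, each a list of
    (observation, action) steps, ordered by the curriculum.
    ongoing_episode: steps of the current episode; only the first t are used."""
    context = []
    for episode in previous_episodes:      # full cross-episodic history
        context.extend(episode)
    context.extend(ongoing_episode[:t])    # current episode, up to step t
    return context

# Toy usage: two previous episodes of 3 steps each, current episode at t = 2.
prev = [[("obs", "act")] * 3, [("obs", "act")] * 3]
ctx = build_context(prev, [("obs", "act")] * 5, t=2)
assert len(ctx) == 3 + 3 + 2  # 8 steps condition the policy's next action
```

A Transformer policy would then attend over this flattened sequence when predicting the next action, which is how information from earlier (worse) episodes can inform behavior in the current one.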
> How the transformer distinguishes each trajectory’s level or expertise without a learnable token or expertise embedding [...] It would be burdensome to collect the accurate level for each trajectory in practice.

We agree with you that adding expertise tokens or level embeddings requires further assumptions. This is one reason behind our design choice not to include them. Instead, trajectory quality is implicitly encoded in the data. For example, as shown in the visualization videos posted [here](https://tinyurl.com/yypwytt), in successful trajectories, goals are visible in the RGB observations. Therefore, we hypothesize that our model can still implicitly infer trajectory quality by sensing shifts in the observation sequence's distribution.

> I think it would be proper to compare with the transformer-based offline RL models such as Decision Transformer or other recently proposed SOTA models to check whether the proposed method just memorizes every sequence pattern in the trajectories or infers something helpful from the curricular data.

Thanks for the suggestion. We extend the main experiments on DMLab with two more baselines, Agentic Transformer (AT [1]) and Decision Transformer (DT [2]). Due to the limit on reply length, please refer to the [global response](https://tinyurl.com/4bc2469z) for the comparison and discussions.

> Societal Impact and Limitations

Due to space constraints, we have included the Societal Impact and Limitations sections in the Appendix, starting from L714. We will include them in the main text in the final version. Regarding the Limitations section, we will elaborate on the feasibility of obtaining curricular data, as discussed in the [global response](https://tinyurl.com/4bc2469z).

**References**
- [1] Liu and Abbeel, Emergent Agentic Transformer from Chain of Hindsight Experience, ICML 2023.
- [2] Chen et al., Decision Transformer: Reinforcement Learning via Sequence Modeling, NeurIPS 2021.
---

Rebuttal Comment 1.1:
Comment: Thank you for the response. While the additional results attached in the PDF are informative, the major concern (first question) is still not alleviated. I read the global response, and the authors' response is about the feasibility of the data with the curriculum. But my concern is the given "prior curriculum sequence" itself. As mentioned in my question, in online RL, the agent should learn how to solve the problem without external intervention. If we have access to pairs of trajectories and their corresponding curriculum levels, then it would be appropriate to experiment in imitation learning and offline RL settings, where the assumption of a pre-collected dataset is valid. Thus, I would decide to keep the score.

---

Reply to Comment 1.1.1:
Comment: Dear reviewer,

Thank you for your reply and explanation of your concern. We believe there is a misunderstanding about our work's setting, which we apologize for and would like to clarify. Regarding our RL experiments on DMLab, similar to previous work [1], training data are collected by source RL agents during their **online** learning. Once the dataset is obtained, our method is trained **offline** in a purely supervised manner. We note that this setting is common [1-4] and that the assumption of pre-collected datasets is mild. We apologize that this subtlety was not clearly explained in our initial submission and may have led to a misperception of our work; we will make sure to clarify it in the final version. However, we would like to highlight that this should not diminish the merits of our work. Please don't hesitate to let us know if you have further questions.

**References**
- [1] Laskin et al., In-context Reinforcement Learning with Algorithm Distillation, ICLR 2022.
- [2] Chen et al., Decision Transformer: Reinforcement Learning via Sequence Modeling, NeurIPS 2021.
- [3] Reed et al., A Generalist Agent, TMLR 2022.
- [4] Lee et al., Supervised Pretraining Can Learn In-Context Reinforcement Learning, Workshop on New Frontiers in Learning, Control, and Dynamical Systems, ICML 2023.

---

Rebuttal 2:
Title: Follow-up on our response
Comment: Dear reviewer,

As the discussion stage is ending soon, we wonder whether our response answers your questions and whether our extra experiments address your concerns. If so, would you kindly consider raising your score? Thanks again for your very constructive and insightful feedback!
Summary: This paper introduces a novel algorithm, referred to as Cross-Episodic Curriculum (CEC), which aims to improve the learning efficiency and generalization capabilities of Transformer agents in multi-task RL settings. The algorithm has been specifically developed to exploit limited sub-optimal data in data-scarce settings such as robot learning. The researchers examine the efficacy of CEC in two distinct and representative scenarios: online RL using 3D maze environments in DeepMind Lab, and imitation learning from human demonstrations of varying quality in RoboMimic. The findings indicate that visuomotor policies trained using the expertise-based curriculum can surpass established baselines and achieve superior performance on simulated robotic manipulation tasks. Furthermore, these policies demonstrate significantly better performance compared to agents trained using offline RL algorithms. The researchers conclude that CEC offers a potentially effective strategy for leveraging restricted yet sub-optimal data in contexts characterized by data scarcity, such as robot learning.

Strengths:
1. The authors conduct an empirical assessment to examine the efficacy of CEC in two distinct and representative scenarios: online reinforcement learning using 3D maze environments from DeepMind Lab, and imitation learning from human demonstrations of varying quality in RoboMimic.
2. Comparative analysis: The findings indicate that visuomotor policies trained using the expertise-based curriculum can equal or surpass established baselines in simulated robotic manipulation tasks. Furthermore, these policies demonstrate significantly superior performance compared to agents trained using offline reinforcement learning algorithms.
3.
Extensive applicability: The researchers conclude that CEC presents a promising strategy for leveraging limited yet sub-optimal data in data-scarce scenarios such as robot learning. Moreover, they assert that CEC is both effective and widely applicable in diverse problem contexts.
4. The open-source code for the algorithm is provided as supplementary material and will be made publicly accessible to facilitate further research on the learning of Transformer agents.

Weaknesses:
1. Restricted comparison: The study compares the proposed algorithm with only a small number of established baselines and does not encompass a comprehensive range of contemporary algorithms, thereby constraining the breadth of the comparison (e.g., [1], [2]).
2. Inadequate consideration of limitations: The manuscript fails to adequately address the limitations of the proposed algorithm or the conducted experiments. This omission may impede comprehension of the potential drawbacks and challenges that may arise when employing the algorithm in real-world scenarios.
3. How does the proposed algorithm scale to larger and more complex environments? The paper only considers relatively small and simple environments, and it is unclear how well the algorithm would perform in larger and more complex environments, such as those encountered in real-world robotics applications.

[1] Piotr Mirowski, Razvan Pascanu, Fabio Viola, Learning to Navigate in Complex Environments.
[2] Siyuan Li, Rui Wang, Minxue Tang, Chongjie Zhang, Hierarchical Reinforcement Learning with Advantage-Based Auxiliary Rewards.

Technical Quality: 3 good
Clarity: 3 good

Questions for Authors:
1. What distinguishes CEC from hierarchical RL?
2.
The experimental results presented in Section 4 raise two inquiries: (1) What are the reasons behind the comparatively inferior performance of CEC relative to vanilla RL on relatively uncomplicated tasks? (2) What factors contribute to the subpar performance of Learning progress?
3. The fundamental concept of CEC, as I comprehend it, is to facilitate the exchange of experiences derived from diverse environments, tasks, and episodes in order to enhance the performance of RL. This viewpoint aligns with the fundamental principles of various curriculum RL approaches. However, extensive research has demonstrated that measuring learning progress is a highly efficacious metric, whereas the findings presented in this article deviate from this prevailing notion. The authors do not expound upon the specific definition of learning progress, which may be a crucial factor and potentially the primary underlying cause of Q.3.
4. The experimental control-group setting described earlier in this paper is insufficient. In addition, it is requested that the authors incorporate a comparative analysis of curricula pertaining to environmental change, e.g., PAIRED (Emergent Complexity and Zero-shot Transfer via Unsupervised Environment Design).

Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors do not discuss the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes
Rebuttal 1:
Rebuttal: Thank you for your constructive feedback. We address your concerns in detail below and will update our paper accordingly.

> Restricted comparison

Thank you for your suggestions. Since [1,2] mainly focus on improving online RL with auxiliary tasks, they could be useful for improving the source agents in our case. However, they are orthogonal to the main focus of our work, i.e., improving the data efficiency of Transformer-based agents that learn in an offline manner. Therefore, we opt to compare against two more relevant baselines and will include a conceptual comparison against [1,2] in the final version. We compare against Agentic Transformer (AT [3]) and Decision Transformer (DT [4]). Please refer to the [global response](https://tinyurl.com/4bc2469z) for the comparison and discussions.

> Inadequate consideration of limitations

Thank you for pointing this out. We included the Limitations section in Appendix D at L714 due to space constraints in the initial submission. We follow your suggestion to discuss the feasibility of obtaining curricular data; please refer to the [global response](https://tinyurl.com/4bc2469z) for details.

> Scale to larger and more complex environments

First, we would like to highlight that the environments used in this work, DMLab [5] and RoboMimic [6], are generally considered challenging. On DMLab, state-of-the-art algorithms such as IMPALA [7] struggle to make any progress on some difficult tasks. We have shown promising results on difficult tasks like Irreversible Path, which encompasses challenges such as 3D navigation, exploration, and long-range planning. Video demos for this task can be found [here](https://tinyurl.com/yypwytt). RoboMimic [6] is a large-scale robotic manipulation benchmark designed to study IL. It consists of demonstrations collected by operators with varying proficiency. Competitive algorithms such as BeT [8] and Diffusion Policy [9] are continuously being developed for it.
Our work falls into the same category, advancing the frontier on this challenging robotic benchmark. We leave experiments on real-world robotics applications as an exciting avenue for future work.

> CEC vs. hierarchical RL?

CEC and hierarchical RL are two orthogonal concepts. Hierarchical RL refers to a group of algorithms that decompose complex tasks into simpler sub-tasks, while CEC is designed to enhance Transformer agents' learning efficiency and generalization by leveraging cross-episodic experiences. That being said, CEC is complementary to any hierarchical RL approach, and the combination of CEC and hierarchical RL could be an interesting direction for future work.

> Why does CEC perform worse than vanilla RL?

Vanilla RL and curriculum RL are directly trained on the test distribution for tens of thousands of episodes until convergence. They are further used as source agents to generate offline training data. Therefore, these two RL agents should be considered **oracles**. This fact is stated at L178 in our submission. By contrast, our CEC agents are trained on only several hundred episodes generated by these oracles. For our task-difficulty-based curriculum, test tasks are even outside the training distribution (Table 1 on page 5). Even so, our CEC agents perform comparably to and even outperform the RL oracles, confirming the superior data efficiency achieved by our methods.

> What factors contribute to the subpar performance of Learning progress?

To make the discussion more concrete, we visualize different policies on Irreversible Path. Videos can be found [here](https://tinyurl.com/yypwytt). We find that by keeping a task-difficulty-based curriculum in context, the agent is able to learn critical skills such as visual navigation and long-horizon planning from relatively easier tasks, then apply them to the most challenging setting. This is demonstrated by the fact that the agent can navigate to gradually more distant goals.
Evaluated on the same difficulty, the agent resulting from a learning-progress-based curriculum only sometimes makes progress toward the goal. The relatively inferior performance could be due to the fact that the source RL agents, which are used to generate training data, also struggle in this task.

> The definition of learning progress

As mentioned at L77 of our paper, we view all RL tasks in this work as goal-reaching problems. Therefore, learning progress can be defined as the improvement in task success rate or reward over time. We will clarify this in our final version.

> Curriculum pertaining to environmental change

The field of automatic environment design, albeit interesting, is beyond the scope of this work. Since we focus on improving the learning efficiency of Transformer agents on sub-optimal data, our evaluation has become more comprehensive after adding the DT and AT baselines, thanks to your suggestion! Incorporating automatically generated environments could be an exciting future extension of CEC, and we will discuss PAIRED [10] in our final version.

**References**
- [1] Mirowski et al., Learning to Navigate in Complex Environments, ICLR 2017.
- [2] Li et al., Hierarchical Reinforcement Learning with Advantage-Based Auxiliary Rewards, NeurIPS 2019.
- [3] Liu and Abbeel, Emergent Agentic Transformer from Chain of Hindsight Experience, ICML 2023.
- [4] Chen et al., Decision Transformer: Reinforcement Learning via Sequence Modeling, NeurIPS 2021.
- [5] Beattie et al., DeepMind Lab, 2016.
- [6] Mandlekar et al., What Matters in Learning from Offline Human Demonstrations for Robot Manipulation, CoRL 2021.
- [7] Espeholt et al., IMPALA: Scalable Distributed Deep-RL with Importance Weighted Actor-Learner Architectures, ICML 2018.
- [8] Shafiullah et al., Behavior Transformers: Cloning k modes with one stone, NeurIPS 2022.
- [9] Chi et al., Diffusion Policy: Visuomotor Policy Learning via Action Diffusion, RSS 2023.
- [10] Dennis et al., Emergent Complexity and Zero-shot Transfer via Unsupervised Environment Design, NeurIPS 2020.

---

Rebuttal Comment 1.1:
Title: Acknowledgement of the rebuttal
Comment: Thank you for your response, which addressed my concerns. These clarifications should be included in the next version. I will raise my score to 6.

---

Rebuttal 2:
Title: Follow-up on our response
Comment: Dear reviewer,

As the discussion stage is ending soon, we wonder whether our response answers your questions and whether our extra experiments address your concerns. If so, would you kindly consider raising the score? Thanks again for your very constructive and insightful feedback!
Summary: This work aims to study mechanisms of cross-episodic attention for effectively learning to improve policies by training on contexts containing gradually improving trajectories.

Strengths: On the whole, this paper is well written. The topic of Transformer in-context learning as an approach to planning and RL is of increasing importance. This work provides another important datapoint in that space for a technique that has scientific value to try.

Weaknesses: The main weakness of this work is its lack of comparison to existing approaches in the space. The empirical results are only compared against approaches that are not intended to take advantage of in-context learning. This loses most of the scientific value of the work, when several other prominent works have used the in-context learning capabilities of transformers in various ways to great success. It is known that in-context learning can help with these sorts of tasks; the question this paper is in a position to answer is whether this way of applying in-context learning works better or worse than the others that have been tried. Even if this approach is importantly distinct from those, it is important to compare against the other natural approaches that have already been tried, such as Algorithm Distillation or AdA, to understand whether this approach to applying in-context learning works better or worse than the others. On a similar note, the paper is often vague about how its approach differs from other approaches in the literature. Specifically, the distinction between "test time meta-learning" and "generalization across varying test configurations in each episode" is difficult to understand.

Technical Quality: 2 fair
Clarity: 3 good

Questions for Authors: How does your RL approach in the DMLab experiments differ from the general approach of Adaptive Agent Team et al.?

Confidence: 3: You are fairly confident in your assessment.
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 3 good
Limitations: The paper could include more discussion of the feasibility of getting data with the requisite ordering of poor data first, better data next, best data last. It could also be useful for the paper to include some mention of the limitations of the imitation paradigm.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes
Rebuttal 1:
Rebuttal: Thank you for your constructive feedback. We address your concerns in detail below and will update our paper accordingly.

> Lack of comparison to existing approaches

Thank you for pointing this out. We compare against two more relevant baselines, Agentic Transformer (AT [1]) and Decision Transformer (DT [2]). Due to the limit on reply length, please refer to the [global response](https://tinyurl.com/4bc2469z) for the comparison and discussions.

> Differences from Algorithm Distillation

Thanks for raising this question. First, we would like to highlight that we have a different focus and scope. AD [3] focuses on learning a meta-RL agent that demonstrates in-context improvement at test time without gradient updates. Instead, we focus on improving the data efficiency of Transformer-based agents by explicitly formulating data in curricular sequences. To achieve this goal, we introduce and validate three solutions in RL and IL settings, which are underexplored in AD. Second, we admit that our learning-progress-based curriculum bears a resemblance to AD at first glance, but the two still differ in several respects, as detailed below.

- On data generation. AD requires **N** different single-task source agents to generate data, which amounts to N different copies of model weights. This is evident in the second requirement and line 1 of Algorithm 1 in the AD paper. By contrast, our learning-progress-based curriculum makes minimal assumptions about the diversity of source agents; in fact, we only require a single multi-task agent to generate training data. This makes our method more applicable since it requires less storage and time.
- On evaluation. AD is designed to work with only a single task instance at test time. This is evident in line 12 of Algorithm 1 in the AD paper. In other words, every time the environment is reset, the agent will be spawned at exactly the same initial location.
The goal location and other environmental transitions also remain the same across all evaluation episodes. By contrast, our agent is evaluated with new and different task instances every episode, meaning that the agent's initial location, the goal location, and other factors change every test episode. Therefore, our test setting is **strictly harder** than AD's.

Due to the above two differences, we do not think our work is directly comparable to AD. We will still conceptually compare our learning-progress-based curriculum against AD and clarify the discussion in the camera-ready version.

> Differences from Adaptive Agent Team et al. [4]

We would like to highlight the differences from AdA [4] from the following perspectives.

- Scope and focus. Similar to how our work compares against AD, we focus on improving the data efficiency of Transformer-based agents by explicitly formulating data in curricular sequences. We introduce and validate three solutions toward this goal. Our method also works in imitation learning settings (robotic manipulation from human demonstrations), which meta-RL literature such as AdA does not explore.
- Online vs. offline data generation. AdA is trained by online RL (Muesli [5], specifically), where the agent learns from experiences through trial and error. Our cross-episodic curriculum method instead uses offline behavior cloning: training data are generated first, then fixed and used for learning.
- Reproducibility and open-source availability. AdA is developed and evaluated only on XLand [6], which has been neither open-sourced nor made accessible to the community. Their algorithm implementation is also not open-sourced. By contrast, our method is developed and validated with open-sourced environments (DMLab [7] and RoboMimic [8]). We also open-source our algorithm implementation to contribute to the community.
This makes it straightforward for the community to reproduce our results and to develop follow-up techniques in other domains.

> More discussion of the feasibility of getting curricular data

Thanks for your suggestion. We discuss the feasibility of obtaining each curriculum one by one in the [global response](https://tinyurl.com/4bc2469z). We will move the discussion to the main text in the camera-ready version.

**References**
- [1] Liu and Abbeel, Emergent Agentic Transformer from Chain of Hindsight Experience, ICML 2023.
- [2] Chen et al., Decision Transformer: Reinforcement Learning via Sequence Modeling, NeurIPS 2021.
- [3] Laskin et al., In-context Reinforcement Learning with Algorithm Distillation, ICLR 2022.
- [4] Adaptive Agent Team, Human-Timescale Adaptation in an Open-Ended Task Space, ICML 2023.
- [5] Hessel et al., Muesli: Combining Improvements in Policy Optimization, ICML 2022.
- [6] Open-Ended Learning Team, Open-Ended Learning Leads to Generally Capable Agents, 2021.
- [7] Beattie et al., DeepMind Lab, 2016.
- [8] Mandlekar et al., What Matters in Learning from Offline Human Demonstrations for Robot Manipulation, CoRL 2021.

---

Rebuttal Comment 1.1:
Title: Response to Authors
Comment: Thank you for your rebuttal. You have partially addressed my concerns about the novelty of this work. However, it is still problematic that this is largely written as if the in-context learning ability of transformers has not been used in RL, when there have been many notable successful applications. As such, I find it difficult to raise my score above a 5 (borderline accept) without a more narrow and clear framing of the contributions relative to other methods of achieving in-context learning in RL.

---

Rebuttal 2:
Title: Follow-up on our response
Comment: Dear reviewer,

As the discussion stage is ending soon, we wonder whether our response answers your questions and whether our extra experiments address your concerns. If so, would you kindly consider raising the score?
Thanks again for your very constructive and insightful feedback!
Rebuttal 1:
Rebuttal:
# Global Response

We sincerely thank all reviewers for their thoughtful and constructive feedback. We really appreciate that all reviewers find our idea novel and important for Transformer-based agents. We attach updated versions of Figs 3 and 4 in the one-page PDF. In our responses to each reviewer below, we address their individual questions and comments. We will update the paper and supplementary PDFs with revisions accordingly. We welcome any follow-up discussions!

## More Baseline Comparison

We extend the main experiments on DMLab with two more baselines, Agentic Transformer (AT [1]) and Decision Transformer (DT [2]). AT is a **concurrent** work whose preprint was released after the NeurIPS submission deadline and whose conference version was published at ICML in July. It is closely related to our work in that it trains Transformers on sequences of trajectories sorted in ascending order of their rewards. DT is a popular Transformer-based offline RL method that conditions on return-to-go and does not use any form of cross-episodic curriculum. We control for data, model capacity, training strategy, etc., keeping them the same across methods.

We denote AT and DT trained on data consisting of a mixture of task difficulties as “AT (Mixed Difficulty)” and “DT (Mixed Difficulty)”. Note that these data are the same as those used to train “Ours (Task Difficulty)”. Similarly, we denote AT and DT directly trained on the test difficulty as “AT (Single Difficulty)” and “DT (Single Difficulty)”. These data are the same as those used to train “Ours (Learning Progress)”. Figures are attached in the PDF. Averaged numerical results are presented below.
*Fig 3 averaged results.*

| | Ours (Task Difficulty), Auto | Ours (Task Difficulty), Fixed | Ours (Learning Progress) | DT (Mixed Difficulty) | DT (Single Difficulty) | AT (Mixed Difficulty) | AT (Single Difficulty) | BC w/ Expert Data | RL (Oracle) | Curriculum RL (Oracle) |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| Average | 51.4 | **54.4** | 32.4 | 35.3 | 11.7 | 42.7 | 33.4 | 14.2 | 40.6 | 50.6 |

*Fig 4 averaged results.*

| | Ours (Task Difficulty) | Ours (Learning Progress) | DT (Mixed Difficulty) | DT (Single Difficulty) | AT (Mixed Difficulty) | AT (Single Difficulty) | BC w/ Expert Data | RL (Oracle) | Curriculum RL (Oracle) |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| Average | **39.6** | 27.78 | 31.8 | 13.6 | 39.4 | 29.2 | 18.1 | 30.0 | 37.6 |

We can see that our method with the task-difficulty-based curriculum performs best during evaluation (Fig 3, the first table above), confirming its benefit over the concurrent AT approach that leverages chain-of-hindsight experiences. It also outperforms DT by a significant margin, which suggests that our cross-episodic curriculum helps to extract learning signals that are useful for downstream decision-making. When evaluating generalization ability (Fig 4, the second table above), our method performs better than the concurrent AT baseline and achieves significantly better results than the other baselines. This empirically suggests that CEC helps to learn policies that are robust to environmental perturbations and can quickly generalize to new changes.

## Policy Visualization

We visualize agents' behavior on the DMLab task Irreversible Path [here](https://tinyurl.com/yypwytt). The agent trained with the task-difficulty-based curriculum can navigate to gradually more distant goals. Keeping the curriculum in context helps to distill useful exploration and long-horizon planning skills, which are critical to success at the most difficult level.
Evaluated on the same difficulty, the agent from the learning-progress-based curriculum sometimes makes progress. Its relatively inferior performance could be due to the fact that the source RL agents used to generate training data also struggle on this task. DT and AT demonstrate even worse behavior: they usually choose the wrong direction and get stuck forever.

## Discussion on the Feasibility of Obtaining Curricular Data

- Learning-progress-based curriculum. RL agents generally improve monotonically over time, so they naturally generate increasingly better data. Our curriculum is constructed from a series of checkpoints over the course of training. No additional assumptions are needed to formulate this curriculum.
- Task-difficulty-based curriculum. a) For environments with parameterized difficulty, it is straightforward to create a schedule based on the difficulty parameter, as we did in this work. b) For environments where difficulty is not parameterized, methods such as those in [3] can be used. An exciting avenue for future research would be applying our method to real-world tasks without explicitly defined difficulty.
- Expertise-based curriculum. One limitation is the need to estimate the demonstrators' proficiency. Some IL benchmarks, such as RoboMimic [4], already have proficiency labels. To apply our method more broadly, one way to approximate proficiency is to rank trajectories by completion time. Moreover, in practice, a demonstrator's proficiency naturally increases while collecting more data: from being unfamiliar with the teleoperation system or the task to collecting data with muscle memory [5]. This progression could provide a rich learning signal for CEC.

We will move this discussion to the main text in the final version.

## References

- [1] Liu and Abbeel, Emergent Agentic Transformer from Chain of Hindsight Experience, ICML 2023.
- [2] Chen et al., Decision Transformer: Reinforcement Learning via Sequence Modeling, NeurIPS 2021.
- [3] Kanitscheider et al., Multi-task curriculum learning in a complex, visual, hard-exploration domain: Minecraft, 2021.
- [4] Mandlekar et al., What Matters in Learning from Offline Human Demonstrations for Robot Manipulation, CoRL 2021.
- [5] Mandlekar et al., RoboTurk: A Crowdsourcing Platform for Robotic Skill Learning through Imitation, CoRL 2018.

Pdf: /pdf/13004f53e4a75fb64eae87ad585006bfddb94d78.pdf
NeurIPS_2023_submissions_huggingface
2023
RanPAC: Random Projections and Pre-trained Models for Continual Learning
Accept (poster)
Summary: This paper proposes a frozen random projection layer with nonlinear activation to exploit pre-trained representations for continual learning. Combined with PETL techniques and class prototypes, the proposed method achieves strong performance in class- and domain-incremental learning.

Strengths:
1. The paper is well organized and easy to follow.
2. The introduction clearly summarizes the major strategies for leveraging pre-training in continual learning.
3. The proposed method seems easy to implement and achieves strong performance.
4. Many recent methods on arXiv are compared in the experiments.

Weaknesses:
1. The idea of the proposed method is somewhat incremental. The authors claim that they borrow some ideas from PETL methods. Actually, the Phase 1 seems to (almost) inherit the previous works (line 273-284) and constitute half of the performance improvement (Table 1, 3).
2. Meanwhile, the proposed random projection with nonlinear activation seems equivalent to a randomly-initialized MLP layer, which is a widely-used training trick for transformer backbones in NLP as well as in CV.
3. The authors claim that the extra parameter usage is small compared to the ViT-B/16 backbone (10M vs 84M). However, the compared baselines in Tables 1 and 3 generally require much smaller parameter usage, especially the prompt-based approaches (e.g., L2P and DualPrompt usually require ~0.03M/task and 0.3M in total). Therefore, the comparison might be somewhat unfair. Also, how is "10 times the number of trainable parameters" (line 302) calculated?

Technical Quality: 3 good
Clarity: 4 excellent

Questions for Authors: My major concerns include the novelty and technical contributions of the proposed method, as well as the parameter usage in experiments. Please refer to Weaknesses.

Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: Yes.
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their time and feedback. We address the points individually and kindly ask the reviewer to increase their score based on our response.

> "The authors claim that they borrow some ideas from PETL methods. Actually, the Phase 1 seems to (almost) inherit the previous works (line 273-284) and constitute half of the performance improvement (Table 1, 3)."

**Response:** We have been explicit that PETL methods are ones our approach can benefit from. Our primary contribution is introducing random projections (RP) for continual learning (see lines 92-103, pages 1-2). Therefore, we can choose to use PETL or not. The second-last row of Tables 1 and 3 lists the performance gains due to RP added on top of PETL, showing at least 10% improvement on 9 out of 10 datasets, of which 4 cases improve by over 20%. Also, the final row of Tables 1 and 3 shows the improvement when PETL is absent, which is also over 10% on 6 out of 10 datasets, illustrating that the method does not need PETL to be of value.

> "… the proposed random projection with nonlinear activation seems equivalent to a randomly-initialized MLP layer, which is a widely-used training trick for transformer backbone in NLP as well as in CV."

**Response:** Our RP layer's weights are randomly sampled from the specified distribution and *never trained*, which is very important for CL. This is different from random initialization of *trained* MLP layers. For instance, Gani *et al.* (2022) randomly initialized but then trained MLP weights appended to a pretrained NLP transformer (https://arxiv.org/abs/2210.07240). Please refer us to any specific papers that, like us, leave the random weights frozen.

> "The authors claim that the extra parameter usage is small compared to the ViT-B/16 backbone (10M vs 84M). However, the compared baselines in Table 1, 3 generally require much smaller parameter usage..."
**Response:** Our 10M *non-trainable* parameters are not used in Phase 1; they require only a forward pass and are never back-propagated through during Phase 2 training, so our training is very fast.

> "how is '10 times the number of trainable parameters' (line 302) being calculated?"

**Response:** Without RP, the number of trainable parameters after $K$ classes is $LK$. When using RP, this increases to $MK$. The increase factor is $(M-L)/L$, i.e. ~10 when $M=10000$ and $L=784$ (though smaller values of $M$ are often effective).

---

Rebuttal 2: Title: Reviewer response

Comment: Thank you for replying to my comments and questions. The rebuttal has addressed some of my concerns. However, my major concerns remain:

1. The novelty and technical contribution remain ambiguous. I understand the randomly-initialized MLP might be implemented differently from previous work. However, the use of an MLP is a common strategy to improve prompt-tuning, and has been introduced to prompt-based continual learning [1]. At the current stage, I cannot determine whether this contribution is sufficiently strong. It would be more informative to provide a more in-depth comparison of different MLP implementations, especially their potential functions for continual learning.
2. I understand the MLP is not trained. However, the storage overhead of the 10M parameters seems to be much larger than that of other baselines (e.g., L2P and DualPrompt), which only use 0.3M in total. This may not be a critical issue, but it needs to be clarified as a potential limitation.

[1] Progressive prompts: Continual learning for language models. ICLR 2023.

---

Rebuttal Comment 2.1: Comment: Thank you for your time and the additional remarks and suggestions. If you find our clarifications satisfactory, we kindly ask you to consider increasing the score accordingly.

> "The use of MLP is a common strategy to improve prompt-tuning, and has been introduced to prompt-based continual learning [1].
At the current stage, I cannot determine whether this contribution is sufficiently strong. It will be more informative to provide a more in-depth comparison of different MLP implementations, especially their potential functions for continual learning."

**Response:** Ref [1] and the work it cites learn input prompts for pretrained models and train MLPs (different for each task) that reparameterize them. Our approach is very different, as we use neither prompts nor MLPs; instead, as a Class Prototype strategy for CL, we focus on how to extract maximum discriminability from the network's output feature representations. Our method does not use the multiple layers or bottlenecks common in MLPs. We use random projection of features to a higher dimension and show that this simple approach outperforms the previous state of the art in continual learning for vision tasks.

> "...the storage overhead of the 10M parameters seems to be much larger than other baselines (e.g., L2P and DualPrompt), which only use 0.3M in total. This may not be a critical issue, but needs to be clarified as a potential limitation."

**Response:** Thank you for the suggestion. We described on lines 306-308 how, in theory, the 10M weights can be represented using 32 times fewer bits by using the bipolar distribution. Also, the RP projection size $M$ can be smaller, significantly reducing the total weights (Table A5). We will clarify this in the camera-ready as suggested.
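For readers wanting a concrete picture, the frozen RP layer discussed in this thread can be sketched in a few lines of NumPy. This is an illustrative reading, not the paper's code: the Gaussian sampling and ReLU nonlinearity are assumptions, and a smaller $M$ than the paper's 10000 is used to keep the example light.

```python
import numpy as np

rng = np.random.default_rng(0)

L, M = 784, 2000  # backbone feature dim and RP dim (the paper uses M up to 10000)

# Sampled once and never trained: because these weights stay frozen,
# no forgetting can occur in this layer during continual learning.
W_rp = rng.standard_normal((L, M))

def project(features):
    """Nonlinearly project backbone features (N x L) up to dimension M."""
    return np.maximum(features @ W_rp, 0.0)  # ReLU chosen purely for illustration
```

Only the class-prototype statistics computed from `project(...)` outputs would ever be updated across tasks; `W_rp` itself stays constant, which is the property the response emphasizes.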
Summary: The manuscript proposes a continual learning method called RanPAC, which belongs to the category of class-prototype methods. The authors use a frozen pretrained model to extract feature vectors from the input images and nonlinear random projections to project them to a higher-dimensional space. During training on the task sequence, they continuously update the Gram matrix obtained from the projected representations and use that matrix to make predictions. Through mathematical proofs, they show that the distribution of the projected features in the higher-dimensional space approaches an isotropic Gaussian. The experimental results in the paper show that their method is superior in accuracy to other state-of-the-art continual learning techniques and approaches the upper-bound performance.

Strengths: The idea of investigating the role of random projections of feature vectors obtained from large pretrained models is new in continual learning research. The experimental evidence that random projections help obtain low correlation coefficients between class prototypes is surely a useful contribution.

Weaknesses: I believe the paper has the following minor weaknesses:

- Page 4, section 2.2, rows 145-161: The paper presents the LDA formula and subsequently mentions that the authors employed the Gram matrix. Nevertheless, it was not clear to me the motivation behind this choice.
- Page 5, figure 2: I did not quite understand the plots in Figure 2, left. They are introduced in Section 2, where it is explained that they represent the similarities of class prototypes. However, the meaning of "true class" and "inter-class" was not very clear to me.
- Page 5-6, sections 3.2 and 3.3: I found the mathematical parts hard to understand. I think it would be more beneficial to provide simplified explanations of the equations, using simpler language to clarify their meaning.
Technical Quality: 3 good
Clarity: 3 good

Questions for Authors: The contribution of the work and the results shown are valid. However, as I said in the previous sections, I believe that some parts of the paper could be improved and explained better.

Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their time and feedback. We address the points individually and kindly ask the reviewer to increase their score based on our response.

> "The paper presents the LDA formula and subsequently mentions that the authors employed the Gram matrix. Nevertheless, it was not clear to me the motivation behind this choice."

**Response:** This is explained in detail in the Appendix, Sec. B.4, where three reasons are listed. In the final version we will clarify this by inserting a summary in the Approach section.

> "...the meaning of "true class" and "inter-class" was not very clear to me."

**Response:** "True class" refers to the histogram of cosine similarities between a sample's feature vector and the class prototype for that sample's class label. "Inter-class" refers to the histogram of cosine similarities between a sample's feature vector and the class prototypes for the set of N-1 classes other than the sample's class label. We will clarify this in the camera-ready.

> "I found the mathematical parts hard to understand. I think it would be more beneficial to provide simplified explanations of the equations, using simpler language to clarify their meaning."

**Response:** Appendix B (Sections B.1-B.4) expands on the details of all the mathematical concepts. We will add clarifying remarks to enhance readability in the camera-ready version.

---

Rebuttal Comment 1.1: Title: Rebuttal response

Comment: I thank the authors for the rebuttal. After consideration, I will maintain my previous score of Accept for the manuscript.
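One plausible minimal reading of the Gram-matrix mechanism the reviewer asks about, sketched in NumPy: second-order statistics of the (projected) features are accumulated task by task, and a ridge-regression readout is solved from them. The variable names, sizes, and fixed `lam` here are illustrative assumptions; the paper's exact normalization and lambda selection may differ.

```python
import numpy as np

M, num_classes, lam = 200, 5, 1e-2  # small illustrative sizes; lam = ridge parameter

G = np.zeros((M, M))             # Gram matrix of projected features, summed over all tasks
C = np.zeros((M, num_classes))   # feature / one-hot-label cross-correlations

def update(H, Y):
    """Accumulate statistics for one task (H: N x M features, Y: N x num_classes one-hot).
    Statistics are only added to, never overwritten, so earlier tasks are retained."""
    global G, C
    G += H.T @ H
    C += H.T @ Y

def predict(H):
    """Ridge-regression readout solved from the accumulated statistics."""
    W_o = np.linalg.solve(G + lam * np.eye(M), C)
    return (H @ W_o).argmax(axis=1)
```

Because `update` only adds to `G` and `C`, the order in which tasks (or even individual classes) arrive does not change the final statistics, which is consistent with the order-invariance the authors describe elsewhere in the discussion.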
Summary: The authors propose a method for replay-free continual learning from a pre-trained model based on random projections and prototypes. The method has a high parameter cost compared to SOTA prompting-based methods, but it is also a unique method that strongly outperforms those SOTA methods. Overall, the experiments and analysis on class-incremental and domain-incremental learning provide strong motivation for the proposed approach.

Strengths:
1) I enjoyed the writing style of this paper. It was clear and scientific.
2) I appreciate the authors thinking outside of the box. Rather than another prompting method, they propose a clever new direction with high motivation and experimental justification. The method has good intuition and motivation for the problem setting.
3) The experiment section is very comprehensive and mostly satisfying.
4) I explored the SM and am very happy with all of the additional details and experiments provided by the authors.

Weaknesses:
1) There seems to be large number of additional trainable parameters. While for 10 tasks, this is only 10/84 of the model size, it could double the necessary parameters for a longer task sequence.
2) Speaking of task sequences, I would like to see how the method performs for longer task sequences. A simple and easy experiment would be 20-task ImageNet-R.
3) A computation time analysis (training and inference) seems to be missing.

Technical Quality: 3 good
Clarity: 3 good

Questions for Authors:
a) How does the method perform on longer task sequences? For example, 20-tasks of ImageNet-R? There is one table in the SM with 20 tasks, but I would rather see something comparing to the other methods from Table 1.
b) Thank you for being transparent in your experiment section about the parameter costs of your method. How does the additional parameter costs compare to other methods?
c) How does computation costs compare (both training time costs and inference time costs) to other methods?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Reasonable discussion on limitations is included.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their time and feedback. We address the points individually and kindly ask the reviewer to increase their score based on our response.

> "There seems to be large number of additional trainable parameters. While for 10 tasks, this is only 10/84 of the model size, it could double the necessary parameters for a longer task sequence."

**Response:** The mentioned 10/84 relates to the $LM$ untrained RP weights. This does not change as the number of tasks or classes grows. For DIL (Table 3), there is no parameter growth with tasks. For CIL, the number of parameters in the output head will double for any method (not just ours) if the number of classes doubles. Our per-class weight count is larger due to the expansion from $L$ to $M$, but when using RP with $M=10000$, we would need ~4200 classes to reach half the ViT-B/16 size. Smaller $M$ can suffice, however, as indicated in Table SM5.

> **Question a):** "How does the method perform on longer task sequences? For example, 20-tasks of ImageNet-R? There is one table in the SM with 20 tasks, but I would rather see something comparing to the other methods from Table 1"

**Response:** We agree that a comparison at $T=20$ for more methods will be valuable. The following table compares our approach to results extracted from ref [51] and will be added to Table SM4:

| Method | CIFAR100 | IN-R | IN-A | CUB | OB |
| ---------- | -------- | ----- | ----- | ----- | ----- |
| Ours | **90.8** | **75.4** | **58.9** | **89.7** | **79.4** |
| L2P | 70.96 | 56.25 | 40.71 | 58.23 | 60.19 |
| DualPrompt | 72.04 | 69.25 | 40.95 | 67.46 | 64.39 |
| ADaM | 89.67 | 70.47 | 51.48 | 86.7 | 73.53 |

> **Question b):** "How does the additional parameter costs compare to other methods?"

**Response:** Among the comparison methods, L2P, DualPrompt and ADaM each use ~0.3M-0.5M parameters, while CodaPrompt uses ~4M parameters. SLCA trains all 84M ViT parameters.
By comparison, for $K=200$ classes and $M=10000$ we use between 2M and 2.5M trainable parameters (depending on the PETL method), plus 10M untrained parameters. We must highlight that the random projections are not trainable. Therefore, the training overhead compared to other approaches is the difference in the size of the projection space, from $L$ to $M$. We believe this increase in dimensionality is a worthwhile trade-off for simplicity of implementation and low-cost training (e.g. the extracted features can be cached).

> **Question c):** "How does computation costs compare (both training time costs and inference time costs) to other methods?"

**Response:** Inference speed differs negligibly from that of the original pretrained network, because both the RP layer and the final linear head are implemented as simple fully-connected layers on top of the underlying network. For training, Phase 1 trains PETL parameters using SGD for 20 epochs on ($1/T$)'th of the training set, so it is much faster than joint training. Phase 2 is generally only slightly slower than running all training data through the network in inference mode, because the backbone is frozen. The slowest part is the inversion of the Gram matrix during selection of $\lambda$, but even for $M=10000$ this is on the order of 1 minute per task on a CPU, and it can easily be sped up. In general, we believe the efficiency and simplicity of our approach compared to the alternatives is a strong point. We will add remarks on this to the final version.

---

Rebuttal Comment 1.1: Title: I am satisfied with the rebuttal response

Comment: I thank the authors for their rebuttal response and raise my rating to accept. Thank you for the hard work.
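As a back-of-envelope check of the parameter figures quoted across this thread (treating the ~84M ViT-B/16 count and the $L=784$, $M=10000$, $K=200$ values from the responses as given):

```python
# Rough parameter accounting for the RP layer and per-class head.
# All figures are approximate and taken from the discussion above.
vit_params = 84_000_000
L, M, K = 784, 10_000, 200

rp_frozen = L * M                    # untrained, frozen RP weights
assert rp_frozen == 7_840_000        # ~8M; together with ~2M trainable head
                                     # weights this matches the "~10M vs 84M" figure

head_trainable = M * K               # trainable per-class weights with RP
assert head_trainable == 2_000_000   # the "between 2M and 2.5M" figure

# Classes needed before per-class weights alone reach half the backbone size
assert (vit_params // 2) // M == 4_200   # the "~4200 classes" figure

# Growth in trainable head size vs. a plain L-dimensional head:
# (M - L) / L is about 11.8x, i.e. roughly the order-of-magnitude
# "(M - L)/L, i.e. ~10" increase quoted in the earlier response.
assert 10 < (M - L) / L < 12
```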
Summary: The paper investigates continual learning using frozen pretrained vision transformers. The authors conduct a thorough analysis of the potential limitations and strengths of continual learning methods that utilize pretrained models, supported by theoretical studies and derivations. Additionally, they introduce a novel algorithm called RanPAC, which incorporates a random projection layer with a nonlinear activation function, combined with a Parameter-Efficient Transfer Learning (PETL) method. To assess the efficacy of their proposed solution, the authors conduct extensive experiments on various datasets such as CIFAR and ImageNet, and across different scenarios including class-incremental and domain-incremental learning.

Strengths: The authors have successfully addressed the problem at hand and conducted a comprehensive set of experiments to assess the effectiveness of their proposed solution. The motivation behind their research is evident, and the current trend of extending foundation models to the continual learning (CL) field makes their work particularly relevant. Both the introduction and related works sections are clear and well organized, providing a solid foundation for the study. The method description is well supported by theoretical derivations, which are also included in the supplementary material for further clarity. The ablation study supporting the obtained results is highly commendable, as it enhances the reliability of the findings. Additionally, the authors' submission of code for transparency and reproducibility is greatly appreciated, further validating the credibility of their research.

Weaknesses: The paper suffers from issues with clarity and organization. The current structure makes it challenging to follow, as the authors have mixed background information with the method, resulting in an unclear narration.
To improve the paper's flow, the authors should consider moving the "Overview and Intuition" subsection to the beginning of the method section. This would give readers a better understanding of the approach before delving into the technical details. While the topic covered is of interest to the CL community, the contribution of the paper appears to be limited. Although the proposed algorithm is supported by derivations and theoretical foundations, it seems like a combination of existing tricks in CL to mitigate the shift towards new classes. As a result, the novelty of the approach is limited. Regarding Equation 1, the usage of the Gram matrix updated over time, while effective, does not appear to be a strong innovation compared to techniques like LDA. Table 1 raises concerns about the upper bound, particularly the performance of the joint linear probe; the results are surprising and not entirely convincing. The message of figure 2, is not clear and easy to get. The left part of the figure is not clarified in the text.

Technical Quality: 3 good
Clarity: 2 fair

Questions for Authors:
- The lack of an analysis of different PETL methods is a notable gap in the paper. Since Algorithm 1 is designed to work with any PETL method, it would have been interesting to observe and compare the performances of different PETL solutions.
- In section 3.2, the authors discuss the impact of RP and M. However, they fail to demonstrate the impact on performance adequately. It would have been beneficial to compare models with the same phi but varying M from 1 to a significantly larger dimensionality (ideally infinite, per the authors' claim).
- The authors claim that M=2000 is a good choice in section 3.2, but during experimentation, they raised it to 10000. The reason for this discrepancy should be addressed to provide clarity on the choice of M and its impact on the results.
- To further assess the effectiveness of the proposed solution, it would be valuable to study larger first task scenarios, such as 50-10, or more challenging scenarios like 50-5/50-2.

Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair

Limitations: The authors analyze the limitations of their method in a few lines before concluding the paper. This section should definitely be expanded.

Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their time and feedback. We address the points individually and kindly ask the reviewer to increase their score based on our response.

> "...the authors have mixed background information with the method ... should consider moving the "Overview and Intuition" subsection to the beginning of the method section."

**Response:** We agree with the recommendation and in our final version will introduce an explicit, separate Background section and move material into it.

> "... it seems like a combination of existing tricks in CL to mitigate the shift towards new classes. As a result, the novelty of the approach is limited."

**Response:** We respectfully disagree. The novelty and primary contribution is the introduction to pretrained models of a frozen, untrained RP (random projection layer to dimension $M\gg L$) with nonlinear activation, *which has never been done in the continual learning context*. This is *not* an existing trick in CL with deep neural networks. For the CL community, frozen RP weights are a fresh strategy that may inspire use within other CL methodologies, because forgetting cannot happen in those weights.

> "…the usage of the Gram matrix updated over time, while effective, does not appear to be a strong innovation..."

**Response:** We agree, which is why pages 1-2 (lines 91-103) do not list this as a methodological contribution. Three reasons for using the Gram matrix are explained in the Appendix, Sec. B.4. The choice supports our conceptual contributions illustrated in Fig. 2. Moreover, it enables the links to theory in Secs. B.3 and B.4.4 that lead to our interpretation of the method as decorrelating class prototypes (which is absent in NCM) and to our use of ridge regression (important for our empirical results).

> Surprising upper bound results in Table 1.

**Response:** The "upper bound" for continual learning performance is *joint fine-tuning*. Our results for joint fine-tuning are in Table 2.
In Table 1, we report only joint linear probe results (with a frozen backbone), and these are not an "upper bound." We will clarify this where we mention the upper bound on line 97, and within the table captions.

> "The message of figure 2, is not clear and easy to get. The left part of the figure is not clarified in the text."

**Response:** Fig. 2 shows that our method reduces correlations between the class prototypes of different classes (right side), thus creating better class separability (left side). We will add this to the caption. The left side of Fig. 2 is explained on page 4, lines 175-182, the paragraph immediately after the right side is explained. We will join these paragraphs together.

> **Question 1:** "...compare the performances of different PETL solutions..."

**Response:** Analysis of the 3 PETL methods is provided for 7 datasets in the Appendix, Sec. F.6 (lines 766-771 for analysis), and Fig. SM7 shows comprehensive comparisons. We followed ref [51] for PETL. Although our overall methods differ from those of [51], like them we found that the best-performing PETL method depends heavily on the dataset (and pretrained weights). E.g., we found for CIFAR100 and $T=10$ that AdaptMLP gives better results than SSF or VPT; for Cars, VPT is best; and for ImageNet-A, SSF is best (lines 766-771).

> **Question 2:** "It would have been beneficial to compare models with the same phi but varying M from 1 to a significantly larger dimensionality...."

**Response:** We explore the scaling with $M$ in the Appendix, Sec. F.5. Table 5 shows results for split CIFAR100 with $M$ increasing from $100$ to $15000$ for the same $\phi$. The table shows diminishing returns as $M$ goes past ~$5000$. Also, Fig. 3 (left) shows data for $M=5000$, $10000$, and $15000$ with the same $\phi$.

> **Question 3:** "The authors claim that M=2000 is a good choice in section 3.2, but during experimentation, they raised it to 10000.
The reason for this discrepancy should be addressed to provide clarity on the choice of M and its impact on the results"

**Response:** In Sec. 3.2 we chose $M=2000$ arbitrarily, solely for the purpose of illustration in Fig. 2. We made no claim about this choice being "good". For consistency, we will update Fig. 2 to use $M=10000$, but there are no perceptible differences in this visualization.

> **Question 4:** "...it would be valuable to study larger first task scenarios, such as 50-10, or more challenging scenarios like 50-5/50-2."

**Response:** In the Appendix, Sec. F.3, we compare $T=5$, $T=10$ and $T=20$. The $T=5$ scenario produces a larger first task. Table 4 shows that performance is better when the first task is larger. The reason is that Phase 1 adapts the pretrained model to a larger chunk of the overall dataset. Also note the column "No PETL" in Table 4, which stands alone without a choice of $T$. This is because Phase 2 has no dependency on the size of task increments. We mention this on lines 242-243, with further details in Sec. F.3; i.e., in Phase 2, final accuracy is invariant to the order in which a completed set of data from all $T$ tasks is used. This property follows from Eqn (3), the use of frozen untrained RP weights, and the use of class prototypes. This combination enables classes to be added one at a time, with prediction outputs for each class unaffected by the subsequent addition of more classes.

> "The authors analyzed the limitations of their method in few lines. before concluding the paper. This section should be definitely expanded."

**Response:** Thanks for the comments. We will expand the discussion of limitations in the camera-ready version.

---

Rebuttal Comment 1.1: Title: Reviewer Response

Comment: Thanks to the authors for the detailed answer. However, I still have some doubts about the novelty and effectiveness of the method.
- Even if the RP with M>>L is claimed as a huge contribution, I have still some concerns about the intuition and if this is sufficient as it is. - In SM section F3, the dimension and the number of classes in the first task is not specified. - The motivation behind section SM F4 is not clear, why do the authors present the result for Task Agnostic Continual Learning? Task Agnostic Continual Learning and Class Incremental Learning should be the same from my point of view. --- Reply to Comment 1.1.1: Comment: Thank you for your time and the additional remarks and suggestions. If you find our clarifications satisfactory, we kindly ask you to consider increasing the score accordingly. > "... the effectiveness of the method." **Response:** The strong effectiveness of our method is evidenced by the second-last row of Tables 1 and 3. Our results show that introducing RP leads to at least a 10\% reduction in error rate in 9 out of 10 datasets, e.g. CUB, Core50 and DomainNet have 26%, 32% and 27% improvements, respectively. > "Even if the RP with M>>L is claimed as a huge contribution, I have still some concerns about the intuition and if this is sufficient as it is." **Response:** Our underlying intuition is discussed in Section 3.3 (especially Lines 258-271, page 6, and Figure 3). To summarize, we observed that Class Prototypes (CP) formed from the pretrained model's embeddings can be made more linearly separable if interactions between features are utilized. Since training a layer of weights where we insert RP is not feasible for CL (as it results in catastrophic forgetting), we instead propose RP followed by a nonlinear activation in order to create a transformed set of features that is more linearly separable using a CP method than features directly extracted from a pretrained model, as confirmed in Figs 2 and 3. As an additional contribution, we also show, for the first time, why CP strategies for CL benefit from decorrelation using second-order statistics, i.e. 
compared to simple NCM, similarities between CPs and comparison embeddings become better calibrated, resulting in enhanced linear separability. We also show that our approach has high generality, with strong enhanced performance on 10 datasets, three scenarios (CIL, DIL and task-free), and for both transformers and CNNs (see Results). > "In SM section F3, the dimension and the number of classes in the first task is not specified. " **Response:** In our rebuttal when we described having a larger first task with $T=5$, we meant this relative to the $T=10$ case, not that the first task had more classes than the other tasks. In all CIL experiments, all tasks (including the first) have identical sizes for the same $T$. E.g. if $K=200$ and $T=10$ then all $10$ tasks have 20 classes. But if $T=5$ then all tasks have 40 classes. So our point was that with Phase 1 adaptation, having 40 classes in the first task for $T=5$ explains the overall stronger results compared with $T>5$. > "The motivation behind section SM F4 is not clear, why do the authors present the result for Task Agnostic Continual Learning? Task Agnostic Continual Learning and Class Incremental Learning should be the same from my point of view. " **Response:** We use an identical "task agnostic" scenario to that introduced in ref [56]. Others in the literature have used the term "task free" instead (Shanahan et al, "Encoders and Ensembles for Task-Free Continual Learning", 2021) and we will clarify this in the camera ready. In summary, in the scenario of [56] and our Section F4, the set of classes available during *training* changes randomly with no way to define task boundaries. In contrast, for CIL, although *inference* is task agnostic, *training* is applied to disjoint sets of classes, described as tasks.
null
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Tree Variational Autoencoders
Accept (spotlight)
Summary: This paper introduces a new class of variational autoencoders that define a binary tree structure. Given a fixed tree, the generative model is a top-down hierarchical model with binary routing components, and the inference network follows a top-down approach in the style of the Ladder VAE. The authors propose to grow the tree iteratively every N training steps. They also suggest using `NT-Xent` to align the learned representation with human perception. Based on binary image, natural image, and text datasets, the authors demonstrate the effectiveness of the method in terms of modelling (likelihood) and clustering performance. Examples of learned trees are presented in the paper and are convincing: coupled with `NT-Xent`, Tree VAE learns convincing data hierarchies without explicit supervision. Strengths: A strong, creative and well-written paper! 1. This paper makes a significant contribution towards learning meaningful hierarchical models, which is an important problem in the field 2. The solution described in this paper is creative and well presented (structured presentation: generative model vs. inference network, detailed derivations, intuitive notation) 3. The experiments are extensive (5 datasets, 4 baselines, 10 seeds for each run) 4. The visual results are very convincing 5. In-depth appendix Weaknesses: 1. Section 2.6 is unclear as it stands. I suggest adding the equation corresponding to the auxiliary loss term that comes with the use of `NT-Xent` 2. The high quality of the results might be tightly tied to the use of `NT-Xent`. According to appendix E.3, a lot of work is required to get this right. I think this work would greatly benefit from an ablation study targeted at the choice of method for integrating prior knowledge (e.g., without `NT-Xent`, with another method) Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. What is the modelling performance (likelihood) of Tree VAE without `NT-Xent`? 
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 4 excellent Limitations: Limitations of the method are not sufficiently discussed. The authors might want to discuss some of the following points more thoroughly 1. the multi-stage training of TreeVAE adds complexity, and result might vary depending on the choice of expansion criterion 2. the results depend of the choice of "prior knowledge" model Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your comments and feedback! We appreciate your support for this paper. W: We will make sure to extend Section 2.6 for the camera-ready version. We intentionally kept it short, as we only use `NT-Xent` for real-world image datasets, that is, CIFAR-10, CIFAR-100, and CelebA. Here, the reconstruction loss is not sufficient for guiding the tree toward a meaningful clustering, which is why for those datasets, we inject inductive biases through contrastive learning. As such, the results of MNIST, FMNIST, 20Newsgroup, and Omniglot are solely tied to the maximization of the ELBO, and the presented generative modeling performance of TreeVAE is without the `NT-Xent`. For the choice of method for integrating prior knowledge, we have experimented with multiple options. They are all inspired by the fact that two augmented versions of the same input should behave similarly. Please note that the following results were obtained on a curated subset of CIFAR-10, as this ablation was performed during the model development stage. Thus, the absolute values are not directly comparable to the values in our paper; however, the relative differences between methods remained comparable across experiments, so, in order to save computational resources, we believe these results are still meaningful. |Method|NMI| |-|-| |Ours (=A+B)|$44.1 \pm 0.7$| |(A)|$21.7 \pm 4.0$| |(B)|$6.0 \pm 6.0$| |(C)|$23.0 \pm 1.2$| |(D)|$0.2 \pm 0.0$| |(E)|$1.0 \pm 1.3$| Here, (A) corresponds to the `NT-Xent` regularization on the routers, (B) to the `NT-Xent` regularization on the bottom-up embeddings, (C) to a transposed version of the `NT-Xent` regularization on the routers, following Li et al. [2], (D) to the `NT-Xent` regularization of only the output of the encoder, and (E) to minimizing the Jensen-Shannon divergence between the router probabilities of the two augmented inputs. Further experiments showed that the combination of (A) and (B) leads to the best results. 
Q: We hope that we were able to answer your question in the previous section noting that the modelling performance in the submission is already without `NT-Xent`. L1: Lastly, regarding the multi-stage training, we agree with the reviewer and leave the investigation of a more principled approach regarding the expansion criterion to future work. We will make sure to include this in the limitations section of the camera-ready version. Additionally, we would also like to refer to our answer to Reviewer UPHe, where we show that given enough computational resources, the computational complexity is the same as for an iteratively grown LadderVAE. L2: Discussed in the weaknesses section. If you have any additional questions, comments, or feedback, we will be happy to address them.
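For readers unfamiliar with the `NT-Xent` objective discussed in this rebuttal, the generic SimCLR-style formulation can be sketched as follows; this is the standard loss, not the authors' specific router or bottom-up variants (A) and (B):

```python
import numpy as np

def nt_xent(z1, z2, temperature=0.5):
    """SimCLR-style NT-Xent: rows of z1 and z2 are embeddings of two
    augmented views of the same inputs; matched rows form positive pairs."""
    z = np.concatenate([z1, z2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # work in cosine space
    sim = z @ z.T / temperature                        # scaled similarities
    np.fill_diagonal(sim, -np.inf)                     # mask self-similarity
    n = len(z1)
    # the positive partner of row i is row i+n, and vice versa
    targets = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # per-row softmax cross-entropy against the positive pair
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    return float(np.mean(logsumexp - sim[np.arange(2 * n), targets]))
```

In words: each view is pulled toward its augmented partner and pushed away from all other samples in the batch, which encodes the intuition that "two augmented versions of the same input should behave similarly."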
Summary: The paper presents a novel method of unsupervised hierarchical clustering by encoding structural sequential dependencies between hidden variables within the framework of variational auto-encoders. The authors adopt a top-down and bottom-up dependency design similar to Ladder VAE, but impose a binary tree or decision-tree-based structure on the prior and posterior distributions of latent variables, instead of the fixed chain structure adopted by Ladder VAE, to discover hierarchical and well-organized dependencies among encoded representations. In the proposed decision tree structure, all hidden variables are organized in a top-down architecture with a nested set of binary gating/decision variables controlling the direction of every move, and altogether optimized with a unified objective function derived through variational inference. To make the binary tree structure flexible and learnable, a sequential tree growth procedure is also developed. The paper compares TreeVAE with both non-hierarchical and hierarchical baselines, and it shows superior clustering performance and achieves a competitive log-likelihood lower bound. Strengths: 1) The paper is well written, and is easy to follow. Informative figures and detailed formulae are given to explain how the model is built, which is very helpful. 2) Although the perspective of modeling hierarchical clustering as a binary decision problem is not novel, the idea of extending Ladder VAE to the hierarchical clustering task is interesting and looks natural. 3) Quite a few illustrative examples are introduced to validate the meaningful hierarchical structures learned by TreeVAE (e.g., Figure 1 and Figure 4), which makes the paper easier to follow and the results more convincing. 
4) Extensive experiments are conducted on multiple datasets (e.g., MNIST, Fashion, 20Newsgroups, Omniglot, CIFAR-10/100, CelebA) and multiple tasks (e.g., hierarchical clustering, sample generation and hierarchy discovery) and with quantitative results compared with state-of-the-art baseline methods. Weaknesses: 1) The paper provides comprehensive experimental results, which is good. But the qualitative results are not explained very clearly, prompting people to doubt whether the proposed method can really generate proper hierarchies that are consistent with the dataset’s intrinsic structure. 2) Some important ablation studies are missing, such as data augmentation and the pre-determined maximum depth of the binary tree. 3) For a generative model, sample generation should be an important part of the evaluation, but the implementation and experimental results are unclear in context and mostly postponed to the appendix, making it quite confusing. 4) Since the tree growth procedure proceeds in an iterative and sequential way, the corresponding training efficiency is doubtful. And even so, before the procedure we still have to set up the maximum depth of the tree or the number of leaves depending on the specific dataset. 5) Data augmentation is a relatively weak form of prior knowledge, especially for the hierarchical clustering task; it would be appreciated to incorporate other easily accessible forms if possible. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1) The pre-defined maximum depth of the binary tree or number of total leaves should be a major factor for hierarchical methods. So it would be more convincing to take different choices of this factor into account and make comparisons accordingly. 2) Could the authors explain how the labeled hierarchical structure shown in Figure 5 is obtained? 
If it is an empirical summary, the results will not be convincing enough since there might exist a mismatch between ground-truth labels and the clustering hierarchies learned by the model, with clustering accuracies less than 50 percent as shown in Table 1. 3) The results of Figure 6 and Table 3 are really confusing and vaguely explained. What does the term ‘leaf-frequency’ mean and what can be concluded from the results? Could the authors explain it in a clearer way? 4) Since the binary tree structure is learned in a sequential way, would it be better to term it a ‘suboptimal’ or ‘locally optimal’ tree instead of the ‘optimal’ tree adopted in the paper? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: see above Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear reviewer NuUb, thank you for your feedback and constructive criticism. W1: Due to the space limitations, we have refrained from explaining the qualitative results in greater detail; however, we will make sure to include this in the camera-ready version. There are also additional figures in Appendix C, which we described in more detail. We also want to point toward Fig. 1 of the additional PDF combined with our response to reviewer bo43, where we perform an analysis of the root embedding of FMNIST. We show that in the learned embeddings, bags are embedded in between shoes and tops, while trousers are an entirely separate category of their own. Additional evidence is presented in the 20Newsgroup dataset, which we talk about in your Q2 below. W2: In addition to the ablation studies in Appendix C, we have added an extensive ablation study in Table 1 of the additional PDF, as well as presented a comparison of different contrastive approaches in our response to reviewer 822K. We want to emphasize that without data augmentation, the clustering performance for the CIFAR datasets would be greatly reduced, as the model clusters according to background information, which usually influences reconstruction loss more than the foreground information. W3: Thank you for this comment. As we are solving two tasks in one (those being generation and hierarchical clustering), we decided to emphasize the discovery of hierarchies in the main text. We will make sure to move selected generative figures from the Appendix into the main text for the camera-ready version. W4: We have experimented with multiple growing procedures and the iterative way has proven to be the most stable one; see Fig. 2 of the additional PDF for a visualization of the training ELBO behavior on FMNIST. By iteratively growing, we divide the complex problem into simpler subproblems, which is inspired by agglomerative clustering. 
Additionally, with enough computational resources, one could grow all nodes of the same depth simultaneously, as the resulting subtrees are conditionally independent, which would further speed up training. Note also that during training of the subtree, the weights of the rest of the tree are frozen (see Fig. 3 (right) of the main paper), which improves training efficiency. Regarding the stopping criterion of the clustering procedure, we want to mention that it is common to set the number of clusters $K$ a priori. Nevertheless, we agree that this is unsatisfactory and will focus future efforts on finding smarter ways of addressing this issue, as well as mention it in the Limitations section of the final version. W5: A goal of our method is to perform hierarchical clustering with as few inductive biases as possible, in order to be truly unsupervised and also stay true to maximizing the evidence lower bound. Therefore, we only add additional supervision in the weak form of data augmentation to guide the model towards desirable splits when it is required, which is the case for the CIFAR datasets and CelebA. Naturally, future extensions can be made where stronger inductive biases are leveraged in order to guide and support the clustering procedure; however, this is not the focus of our current work, as we want to emphasize that the model itself already works very well. That is, we propose a theoretically founded model class, where the contrastive extension shows that combining it with forms of supervision is possible. Q1: We refer to Table 1 of the additional PDF, where we have conducted an ablation analysis for different pre-defined maximum depths of the tree while keeping the maximum number of leaves $K$ constant. 
Additionally, the metrics Dendrogram Purity (DP) and Leaf Purity (LP) in the main paper allow unbiased evaluations for dendrograms where the number of learned clusters differs from the ground-truth number, which is why we compute these two metrics with trees that contain $20$ leaves. Lastly, we also want to mention the further experiments in Appendix C, where we depict trees that are grown to more than $10$ clusters, as well as quantitative results for varying both the number of ground-truth clusters and the number of learned clusters. Q2: 20Newsgroup consists of $20$ classes that also have a hierarchical structure. The labels depicted in the figure are the $20$ ground-truth labels of the dataset, where we visualize the majority class of the samples falling into each leaf. The words in each class label indicate the group assignments on each hierarchical level, where each level is separated by a dot. For example, "comp.graphics" indicates a first hierarchical layer of "computer" and a second layer of "graphics". To be clear, we did not choose the hierarchical level of each class in Fig. 5; it is predetermined by the creators of the dataset. Our experiments show that TreeVAE discovers the correct hierarchies, which supports the claim that we uncover meaningful hierarchies. Q3: The CelebA experiments are performed to give an alternative experiment to simply recovering already known labels. Here, we let the model grow and try to interpret the results in a more explorative fashion. We will make sure to put more information in the camera-ready version. Every number $v_{a,k}$ in Table 3 indicates, for a given attribute $a$ and a given leaf $k$, the percentage of all samples falling into $k$ in which attribute $a$ is present. Thus, we observe that people with blonde hair mostly fall into cluster $8$ (given the knowledge that the number of samples is evenly distributed). 
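As a hypothetical illustration of how each $v_{a,k}$ could be computed (the leaf assignments and attribute matrix below are synthetic stand-ins, not CelebA data or the authors' code):

```python
import numpy as np

# synthetic stand-ins: a leaf assignment per sample and a binary
# attribute matrix in the style of CelebA annotations
rng = np.random.default_rng(0)
K, A = 10, 40                                   # number of leaves, attributes
leaves = rng.integers(0, K, size=1000)          # leaf index k for each sample
attrs = rng.integers(0, 2, size=(1000, A))      # attribute a present/absent

# v[a, k]: fraction of the samples falling in leaf k that have attribute a
v = np.zeros((A, K))
for k in range(K):
    in_leaf = leaves == k
    v[:, k] = attrs[in_leaf].mean(axis=0)
```

Reading a column of `v` then shows which attributes dominate a given leaf, e.g. a leaf where the "blonde hair" row is near 1 collects mostly blonde-haired samples.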
Q4: Thank you for the suggestion, we agree with the pointed-out issue and will adopt "locally optimal" in the camera-ready version. We hope our general response and the response to your review address the established weaknesses and questions. We are happy to address any additional feedback or questions that might arise.
Summary: This paper introduces a new generative hierarchical clustering model called Tree Variational Autoencoders (TreeVAE) that uncovers hidden structure in data by adapting its architecture to discover the optimal tree for encoding dependencies between latent variables. The authors compare TreeVAE to other generative models and demonstrate its effectiveness in uncovering underlying clusters and hierarchical relations in real-world datasets. They also discuss the advantages and disadvantages of using TreeVAE, such as its ability to provide a tighter lower bound of the log-likelihood at the expense of a larger neural network architecture and an increase in the number of parameters. Overall, this paper presents an innovative approach to generative modeling that has the potential to improve our understanding of complex data structures. Strengths: 1. The quality of exposition is commendable. The paper presents a clear narrative, beginning with the intuitive premise, followed by a comprehensive description of the model, including a highly useful algorithmic representation. The logical sequencing of these sections facilitates reader comprehension. The authors have succeeded in presenting a complex topic in an accessible and cogent manner. 2. The innovation presented through the proposition of a tree structure in the latent space of a Variational Autoencoder (VAE) constitutes a significant contribution to the field. This unique concept not only manifests novelty but also offers functional value as it endows the model with enhanced performance capabilities. The authors have adeptly demonstrated how this structural adjustment can add a new dimension to the potential applications of VAEs and their progeny. 3. A robust set of qualitative and quantitative evaluations is delivered in the paper. The authors have not shied away from providing extensive empirical evidence to assert the superiority of their model. 
The data presented is both extensive and convincing, demonstrating the improved performance of their model compared to common benchmark models. The evaluative components of the paper are meticulously presented, reflecting the comprehensive and rigorous experimental protocols the authors have employed. The authors' due diligence in this aspect lends credibility to their claims and effectively underlines the practical implications of their research. Weaknesses: 1. The authors have stated that the clusters learned by their model possess semantic meaning. However, it is prudent to query if this semantic characterization emerges from explicit feedback or training, or if it is simply a by-product of the model's intrinsic clustering tendencies. For instance, in Figure (4), items such as pants are grouped alongside shoes and bags, while being distinct from tops. This calls into question the semantic integrity of the learned representation. Hence, it would be advantageous for the authors to conduct an additional layer of embedding analysis, in order to ascertain the proximity of these items beyond mere qualitative observation. Such an endeavor would bolster the readers' understanding and lend more weight to the claim that the clusters learned are indeed semantic. 2. The paper could benefit from a more extensive ablation study that investigates the effect of different components and hyperparameters on the model's performance. For instance, an intriguing area of exploration could be the impact of modifying the number of layers in the tree. It would be worthwhile to examine whether increasing this parameter might encourage a more detailed clustering. A comprehensive study of this nature could elucidate the relative contributions of each component, making clear which are indispensable to the model's performance, and which are more auxiliary in nature. 
By adding more depth to the discussion of how these elements influence the final result, the authors would be equipping their audience with a more nuanced understanding of their model's operation. These enhancements would provide further depth and rigor to the paper, effectively strengthening the authors' claims and helping readers to more fully grasp the novelty and impact of the presented research. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please refer to the above sections for detailed discussion. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes, there is a section at the end of the paper discussing limitations and future research opportunities. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear reviewer bo43, Thank you for your comments and your thorough reading of the paper. W1: The idea behind the claim of semantic meaningfulness is that samples of the same cluster should have a similar latent representation and also that clusters that are close to each other in the hierarchical structure should have more similar latent representations than clusters far apart. This is integrated in the model design, as the learning of the decision boundary is mainly guided through the weighted reconstruction loss. That is, the split in every node is optimized such that the two decoders can specialize themselves as well as possible on the subgroup of data, and therefore the subspace of representations, they observe. Thus, the routers try to find splits that partition the data into two subgroups that are as distinct as possible while being as similar as possible within, which coincides with partitioning the latent space at every node into two semantically different subspaces. With this objective, our model should be able to recover semantically meaningful clusters through the training procedure. A direct way of checking this is by exploring the learned tree of the 20Newsgroup dataset in Fig. 5 of the main work. For each leaf, we denoted the majority class that falls into it. Note that the name of each class entails multiple hierarchical layers. As can be seen, the learned tree recovers the subgroups of different hierarchical levels very well, for example, all computer-related topics are grouped in one subtree. Thus, the human-assigned semantic structure corresponds to the unsupervisedly learned semantic structure. Furthermore, we have conducted an additional layer of analysis for the embeddings of FMNIST. Firstly, we have applied UMAP to reduce the root embedding to 2 dimensions and visualize them in Fig. 1 of the additional PDF. As can be seen, tops as well as shoes are clustered together, while bags are somewhat in between. 
Trousers, on the other hand, are completely separate. Our interpretation is that the root split is made between shoes and tops, where it is unclear in which subtree bags and trousers should fall, as they are both not clearly assigned to one of the two groups. Therefore, depending on initialization, they might end up on either side. To analyze the effect of random weight initialization on the learned dendrogram, we have made further efforts to find a dendrogram that best summarizes the learned dendrograms of multiple runs. For this, we first align the leaves of the different trees by maximizing the overlapping samples, then we store the number of edges between any two clusters for every tree and then average this number over all trees. Thus, we have computed a distance matrix of all clusters, which is averaged over all trees. We then used average and complete linkage to cluster according to the average distance matrix. The recovered dendrograms of the two algorithms are identical, apart from the Bag and the Trouser cluster, which switch places. That is, all tops are in one subtree and all shoes in the other, while bags and trousers are assigned to either the shoes or tops subtree, depending on the clustering algorithm used. What this indicates is that the two clear groups of shoes and tops are consistently recovered, no matter the random initialization, while the assignment of bags and trousers varies, which is to be expected, as there is no clear right choice. W2: We agree that analyzing the effects of different components is important and have thus conducted an extensive ablation study presented in Table 1 of the additional PDF. Additionally, we would like to mention the further experiments performed in App. C, where for example we let the tree grow deeper, which led to the discovery of different subgroups within the same class, for example, the differentiation of straight and tilted $9$'s or dark and light T-shirts. 
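Assuming pre-aligned leaves, the summary-dendrogram step described in the W1 response (averaging edge-count distances over runs, then applying average linkage) can be sketched roughly as follows; the two toy distance matrices are hypothetical, not the authors' FMNIST results:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import squareform

# hypothetical edge-count distance matrices from two aligned trees over
# 4 leaf clusters: entry [i, j] = number of tree edges between leaves i and j
D_run1 = np.array([[0, 2, 4, 4],
                   [2, 0, 4, 4],
                   [4, 4, 0, 2],
                   [4, 4, 2, 0]], dtype=float)
D_run2 = D_run1.copy()                 # a second run with the same structure

D_avg = (D_run1 + D_run2) / 2          # average the distances over runs
# average-linkage clustering on the averaged matrix yields a summary dendrogram
Z = linkage(squareform(D_avg), method="average")
```

Swapping `method="average"` for `method="complete"` reproduces the second variant the authors tried; when both give the same merge order, the recovered hierarchy is stable across runs.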
We hope our general response and the response to your review address the established weaknesses. We are happy to address any additional feedback or questions that might arise. --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal. It answered all of the critical parts of my concerns. My rating would stay at "accept".
Summary: This paper introduces a new deep generative model, the tree variational autoencoder, designed to discover latent hierarchical clusters in data. The generative model makes a number of latent binary choices over a pre-learned tree structure, sampling a continuous representation at each node, then finally decoding the observed data point at the particular leaf of the binary tree that is reached. A scheme for performing variational inference on the model and a heuristic for learning the tree structure are given, and results presented for hierarchical clustering on common small image and text datasets. Strengths: Simple but novel technical idea for model and training, breaking it down into a number of simpler learning problems. Interesting qualitative results in terms of what hierarchy of clusters is discovered and strong quantitative results compared to baselines. Related work includes all relevant work that I am aware of. Weaknesses: The datasets used seem quite small/simple in this day and age. I think it would be interesting to perform hierarchical clustering on the embeddings from a foundation model like Stable Diffusion on a much larger dataset. I think this may be what is being suggested in Line 318 in “Limitations & Future Work”. I would upgrade my score to Strong Accept if these results were present and favorable. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Perhaps I’m missing something simple here but how is learning performed over the Bernoulli routers after the tree structure has been learned and all weights unfrozen? During the tree formation each learning problem has a single Bernoulli variable that is summed out but if the weights are unfrozen then aren’t there a combinatorial number of values to the Bernoulli variables? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. 
Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: Acknowledgments of a few limitations on Pg 9. I suspect there are specific implementation details where an alternative choice would make learning fail. It would strengthen the paper to have an ablation study where the sensitivity to model and training design choices are tested (i.e. going beyond the experiments in C.3). Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your comments and your thorough reading of the paper. W1: We agree with this statement, as the purpose of this work is to introduce a new model class that jointly learns generation as well as hierarchical clustering. We are currently working on improving the model architecture, similar to how NVAE [3] improves upon LadderVAE, in order to reach SOTA performance on SOTA benchmarks, however, these results will not be available by the end of the rebuttal period and might justify a separate paper by themselves. Q1: During the training of the full tree, we sum over all $K$ paths to the leaves. Each term in the sum only depends on the realization within the path, that is, we can ignore all Bernoulli variables that do not lie in the path. Furthermore, for paths that share certain edges and nodes (think of two adjacent leaves), we only have to compute the intermediate values once, as they are shared between the paths. This reduces the computational complexity to being linear in the number of nodes. L1: We agree and refer to Table 1 of the additional PDF. If you have any additional questions, comments, or feedback, we will be happy to address them.
Rebuttal 1: Rebuttal: Dear reviewers, We deeply appreciate your insightful questions, constructive comments and helpful feedback! Your reviews suggest that you invested a considerable amount of time and effort into understanding our work, for which we are very grateful. We provide individualized responses to your Weaknesses (W), Questions (Q), and Limitations (L) in separate sections below and hope that our responses address the raised concerns. Nevertheless, we would like to mention a few points here that might be interesting to everybody: In the additional PDF, we present an extensive ablation study in Table 1. We decided a priori to run the rebuttal experiments on Fashion MNIST, as our clustering performance on this dataset is closest to the baselines; therefore, this provides a worst-case analysis for all comparisons. Attentive readers will observe that the presented values of the default TreeVAE configuration, denoted by "base", are marginally lower (<1%p) than in the paper, because we trained for a slightly shorter time in order to finish the rebuttal period on schedule. All ablation experiments are run over $10$ seeds. Additionally, in Figure 1, we analyze the root embedding of our tree on FMNIST to support why Figure 4 (right) of the main paper clusters bags and trousers together with shoes (for a more detailed discussion, have a look at our response to reviewer bo43). Lastly, Figure 2 shows that the proposed splitting heuristic leads to a stable improvement in the ELBO. Please find the full bibliography utilized in all responses below. We are happy to address any further questions or comments from the reviewers to improve the paper and look forward to a fruitful discussion! 
[1] "Decision Jungles: Compact and Rich Models for Classification", Shotton et al., 2013 [2] "Twin Contrastive Learning for Online Clustering", Li et al., 2022 [3] "NVAE: A Deep Hierarchical Variational Autoencoder", Arash Vahdat and Jan Kautz, 2020 Pdf: /pdf/9befc7518c11e961c101ea83055cc9c038270c42.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper introduces a new architecture for variational autoencoders with a binary tree structured generative model. The approach takes the architecture of the Ladder VAE, but introduces a binary routing variable at each stochastic layer that allows generation to continue down one of two possible paths. The authors derive an evidence lower bound objective for their approach and a simple heuristic for learning the structure of the tree at training time. In their experiments, the authors compare the clustering and generative modeling performance on a variety of benchmark image datasets. Strengths: This seems to be promising work and the model introduced in this paper does appear to have a number of advantages. The model is clearly introduced and the objective function derived by the authors seems sound to me, though I haven't gone through the details in depth. Unsupervised clustering of complex, high-dimensional data is a challenging and relevant problem. Both the quantitative and qualitative evaluations of the clustering performance seem to show that the method does find interesting structures in the data that correspond to known class labels, without having access to these labels at training time. The generative performance is also promising, though the gains are unsurprising given that the model can have substantially more parameters than its competitors. The method is also straightforward to implement and apply in practice. Weaknesses: While I think this work is interesting, I do have a number of concerns: - The computational complexity of training seems to be an issue, as computing the training objective requires evaluating every path in the tree. This likely means that this model is limited to very small trees and is not applicable to discovering large numbers of clusters. Similarly, the number of parameters is large for a given depth. 
If clustering were not a concern, it seems likely that the computational complexity would be better spent simply creating a larger hierarchical VAE model, though this matched parameter count comparison is not explored in the work. I do appreciate that for generation, the complexity only depends on the depth. - I feel the work would benefit from further discussion and analysis of the tree-building routine. The proposed heuristic seems to work reasonably well, but is not well justified apart from the reasoning that the authors want to keep the tree balanced. This reasoning may fall apart if the true clusters are unbalanced. I would like to see discussion of how this choice interacts with the likelihood bound and whether the fact that some leaves can have significantly more generative capacity (as they are deeper in the hierarchy) causes issues. - The VAE/network architectures used for the experiments are somewhat out-of-date. The ladder VAE is substantially outperformed in generative performance by hierarchical VAEs like the NVAE or "very-deep" VAE of Child. State-of-the-art generative performance isn't necessarily the goal of this work, but on the chosen datasets the model presented is not competitive. The VAE + agg clustering baseline could similarly be updated to these newer (pretrained) models or applied to something like a vector quantized VAE. - The experiments are all on benchmark image datasets, which is fine, but I would be more interested in an application to a real-world dataset where the approach could provide new insights, rather than just rediscovering already known image labels. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - How does the model perform if you start with a pre-specified tree structure? - Equations 3 and 5 suggest that $q(z_i)$ is only dependent on the parent of $z_i$ and not $x$, which is inconsistent with equations 6-9 and the diagrams. 
- Line 134, I'm unclear exactly what is meant by the routers having the same architecture as the generative model. Don't they depend on $x$? - Equations 19 and 20 don't use $l$ anywhere within the summation, so it's unclear exactly how multiple samples are used. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors do discuss the limitations appropriately. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful comments and feedback! W1: We agree that a naive training algorithm would have a heightened computational complexity, however, due to the structure of the model we can alleviate this issue: Firstly, we can store the values of every visited node such that we need to compute each only once, making it linear in the number of nodes. Secondly, the subtrees of all nodes with the same depth are conditionally independent, which implies that their computation can be parallelized. This reduces computational complexity to linear in the depth of the tree. Furthermore, during the greedy growing procedure, most of the tree is frozen, as shown in Fig. 3 (right) of the submission. Lastly, even the greedy growing procedure can be parallelized, as subtrees are again conditionally independent. Thus, with enough resources available, we are able to recover the computational complexity of an iteratively grown version of LadderVAE, while simultaneously being able to cluster the data. We do appreciate that if not enough resources are available, one has to omit a few of the aforementioned speed-ups, however, we argue that this is the case for any parallelizable system. Nonetheless, we agree that the effective number of parameters *during training* is higher for TreeVAE than for LadderVAE, with the notable difference that TreeVAE offers lightweight inference. Therefore, we have additionally conducted an experiment in which we matched the parameter count of LadderVAE. The results are |Method|Acc.|NMI |LL |RL|ELBO| |-|-|-|-|-|-| |LadderVAE|$56.4 \pm 1.9$|$62.3\pm2.0$|$-233.7 \pm 0.1$|$224.8\pm0.3$|$-238.9\pm 0.3$| |TreeVAE|$63.6 \pm 3.3$|$64.7 \pm 1.4$|$-234.7 \pm 0.1$|$226.5 \pm 0.3$|$-239.2 \pm 0.4$|. 
As expected, this big LadderVAE performs slightly better generatively; however, LadderVAE does not offer the lightweight inference that reduces the effective parameter count to what is denoted in the main paper. We are quite satisfied that the differences, especially in the ELBO, are only marginal, while the significant difference in clustering performance stays similar. Note that FMNIST was chosen as the worst-case dataset for clustering performance, so we would expect the differences to be bigger in the other datasets. W2: Regarding the proposed tree-building routine, we took inspiration from decision trees for the iterative, greedy growing procedure. Our empirical experiments suggest that the number of samples as a simple splitting criterion supports a consistent improvement of the ELBO, as can be seen in Fig. 2 of the additional PDF, where we monitored the ELBO after the convergence of every subtree. An alternative splitting criterion would be reconstruction loss, for which we conducted experiments that result in a slightly worse test ELBO of $-239.3 \pm 0.2$. Thus, we believe our criterion is a good starting point on which future work, e.g. combining our work with [1], could measure their proposed improvements, and we will make sure to include this in our Limitations section of the final version. Lastly, the fact that some leaves can have more generative capacity is an intended design choice because it gives the model more freedom to adapt to the data. Having the flexibility to find the right depth for each cluster gives the model more rather than fewer capabilities. W3: With respect to the generative capabilities of our work, we agree that it is not state-of-the-art, which is intentional, as this is not the focus of this work. Rather, we want to introduce a new model class that is able to perform generative hierarchical clustering. This is also reflected in our choice of datasets. 
We want to emphasize that the performance difference to NVAE [3] is not due to the theoretical model formulation but rather due to the chosen architecture and designs. We argue that NVAE is in some sense a LadderVAE with smart design choices; we see TreeVAE, on the design-choice level, as similar to LadderVAE, and leave it to future work to develop smarter and more involved design choices for our model, which would be an entire work in itself. Thus, we expect that similarly to how TreeVAE outperforms LadderVAE, a version of TreeVAE with a highly optimized architecture would outperform NVAE. W4: Similar arguments to Weakness 3 can be made for the choice of datasets; however, we are currently working on a medical application of our model and are in contact with practitioners in the field of robotics. We decided against including such applications in the paper in order to, given the space constraints, put more focus on the theoretical side of the method. Here, we would also like to note that Newsgroup20 is not an image but a text dataset and that the presented cluster enrichment of CelebA is intended as an exploratory data analysis, rather than for simply rediscovering categorical labels. Q: In response to your questions, we have conducted an additional experiment, where we a priori fixed the tree structure depicted in Fig. 4 (right) of the submission and trained the full tree without growing. The results are as follows: Acc. $35.1 \pm 4.8$, NMI $54.1 \pm 5.8$, LL $-237.4 \pm 0.4$, RL $228.9 \pm 0.5$, ELBO $-241.0 \pm 0.5$. Expectedly, the results are much worse, as a fixed tree is more prone to local optima, providing support for our growing scheme. Regarding Eq. 3 & Eq. 5, we adopted the notation of LadderVAE (see their Eq. 16) in order to be consistent. For Line 134, the routers of the inference model depend on the intermediate representation $\mathbf{d}_{depth(i)}$ and thus take as input a latent embedding and not $x$. 
Regarding the typo in Eq. 19 & Eq. 20, thank you for bringing it up; we have changed $z_i$ to $z_i^{(l)}$. We hope our general response and the response to your review address your concerns. We are happy to address any additional feedback or questions you might have. --- Rebuttal Comment 1.1: Title: Thank you for the detailed response! Comment: Having read through the rebuttal and global response, I am willing to change my score to recommend acceptance. The authors have addressed several of my concerns. I still feel that some of the shortcomings in terms of the architectures, datasets and tree-building routine are worth further exploration, but I'm inclined to agree they could be left for future follow-up work. I will be interested to see such extensions!
Adapting to Continuous Covariate Shift via Online Density Ratio Estimation
Accept (poster)
Summary: This work introduced an online density ratio estimation method that can adaptively update the model to minimize the risk accumulated over time in the continuous covariate shift scenario. This method is able to estimate density ratios between test and training samples when the test set is varying over time. Only a few unlabeled samples are required at each time step. A theoretical analysis of the regret bound of the density ratio estimator is provided. Strengths: The paper is well written with clear justification and useful theoretical analysis. The proposed method relaxes the requirement of unbiasedness of prior work and does not need a large amount of unlabeled data at each time step. The authors also proved the dynamic regret bound of the density ratio estimator. The experiments demonstrate the effectiveness of the proposed method. Weaknesses: I don't have many complaints about this paper. One weakness is that some important experimental results are in the appendix, which is not reasonable, as the appendix is optional for reviewers to read. I understand the page limitation, but I would suggest moving some preliminary content to the appendix instead. Technical Quality: 3 good Clarity: 3 good Questions for Authors: I understand covariate shift is a well-established research area, but the assumption of covariate shift (that p(y|x) does not change) is quite strong, so how can we verify or guarantee this assumption is valid in a real-world problem? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Since this method is for continuous covariate shift, the inherent limitation would be the assumption of covariate shift. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Many thanks for your great appreciation for our work and the helpful comments! In the following, we will address your questions. We will further improve the paper according to your suggestions. --- **Q1:** important experimental results are in the appendix which is not reasonable as the appendix is optional to read by reviewers **A1:** Many thanks for your careful review and constructive comments. In the next version, we will try to move the empirical results into the main text. We believe this is very feasible, given that an additional page is allowed in the camera-ready version. Thanks! --- **Q2:** but the assumption of covariate shift (p(y|x) not change) is quite strong, how can we verify or guarantee this assumption is valid in a real-world problem? **A2:** We appreciate your insightful comment on the covariate shift condition and acknowledge that it is a somewhat strong assumption. However, it is important to note that the unsupervised distribution shift adaptation problem is inherently challenging, and it is generally hard to perform the adaptation without any assumptions. The covariate shift condition is one of the most fundamental assumptions in the study of learning with distribution shift, which has also been successfully employed in many real-world applications (e.g. [8,9,10]). There are several possible directions to relax the requirement on the covariate shift assumption. When no label information about the test distribution is available, we could explore more complex modeling of the distribution shift, such as considering the case where both covariates and labels shift (Chen et al., 2022). Another interesting direction is to study how to efficiently test the covariate shift assumption with limited labeled data from the test distribution, particularly in the continuous shift setting. Our study for the basic covariate shift setting may serve as a foundational step toward addressing more complex real-world distribution shift cases. 
Thank you for the comments; we will include a more detailed discussion about potential ways to generalize the covariate shift assumption in the next version. **Ref**: Chen et al. Estimating and Explaining Model Performance When Both Covariates and Labels Shift. In NeurIPS 2022.
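Under the covariate shift assumption discussed above, adaptation reduces to reweighting by the density ratio $p_{test}(x)/p_{train}(x)$. The classic probabilistic-classifier construction below illustrates how such a ratio can be estimated from unlabeled samples (synthetic Gaussian data; this is textbook intuition only, not the paper's Bregman-divergence-based estimator):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic covariate-shifted data: p_train = N(0,1), p_test = N(1,1),
# so the true ratio is p_test(x)/p_train(x) = exp(x - 0.5).
x_tr = rng.normal(0.0, 1.0, 4000)
x_te = rng.normal(1.0, 1.0, 4000)

# Logistic regression separating test (label 1) from train (label 0),
# fitted by plain full-batch gradient descent on the log-loss.
X = np.stack([np.ones(8000), np.concatenate([x_tr, x_te])], axis=1)
y = np.concatenate([np.zeros(4000), np.ones(4000)])
w = np.zeros(2)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - y) / len(y)

def ratio(x):
    """Estimated density ratio via Bayes' rule:
    r(x) = (n_train / n_test) * P(test | x) / P(train | x)."""
    p = 1.0 / (1.0 + np.exp(-(w[0] + w[1] * x)))
    return (len(x_tr) / len(x_te)) * p / (1.0 - p)
```

With balanced samples, the classifier's log-odds converge to the true log-ratio $x - 0.5$, so `ratio(0.0)` lands near $e^{-0.5} \approx 0.61$.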
Summary: This paper proposes an online density ratio estimation method to adaptively train a predictor in the scenario of continuous covariate shift. The proposed method estimates the density ratio between the training and testing distributions using a small number of unlabelled samples and updates the predictor using a weighted empirical risk minimisation algorithm. The paper provides a clear and comprehensive explanation of the problem of continuous covariate shift and the challenges it poses. The paper also provides a thorough review of related work and explains how the proposed method builds on and improves existing methods. The authors also give a dynamic regret bound, which finally leads to an excess risk guarantee for the predictor. The regret bound is minimised by designing an online optimisation process to minimise the dynamic regret defined over the observed loss. The paper explains how to optimise the dynamic regret with the online ensemble framework developed in recent studies of non-stationary online convex optimisation. The proposed method is validated through empirical results on synthetic and real-world datasets, demonstrating its effectiveness compared to other algorithms. Overall the paper appears technically sound, is easy to follow, and gives theoretical guarantees. The paper is less strong from the empirical perspective. Further, the setting is a little esoteric (offline training followed by online adaptation for models where density ratios can be used), and may be of limited appeal to a wider audience. ---- Post rebuttal: increasing my score from 6 to 7 Strengths: Formulating the problem using the Bregman divergence was a useful tool to be able to generalise some existing methods, and also unlocked the analysis that follows. Disentangling the model training and importance weight estimation also allowed for adaptation to the continuous shift setting and ability to change importance weight estimators. 
Weaknesses: In terms of empirical study, this is limited to a study using synthetic data, and a small study on the yearbook dataset. For the synthetic scenario, the nature of the shift - sinusoidal and square waves altering a convex combination of distributions for the first two, then linear and “Ber” (which is not fully described in the main text, but consists of samples from a Bernoulli distribution) - seems rather simplistic and artificial. For the single real world experiment, it’s not clear how the hyperparameter settings for the various methods were chosen, so it’s hard to know if this result is cherry-picked or robust. Another thing that’s not clear is whether the intervals chosen for the ensemble members match up with the period of change in the synthetic experiments. If they do, it’s not a surprise that they do well, but this would require knowledge ahead of time. Figures 2-5 are extremely small and hard to read when printed. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: How sensitive is the method to the intervals in the ensemble members? How crucial is the ensemble to the method overall? Am I right in thinking that the bound does not take account of the presence of the ensemble? Does this mean that the bound actually holds for the algorithm as employed in experiments? Could you plot the bound for one of the synthetic examples to show how tight it is? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 2 fair Limitations: The authors did not address limitations or potential negative social impact. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your insightful comments. We will address your questions below and improve the paper according to your suggestions. To better present additional experimental results, we include them in a PDF attached with the global response. --- **Q1:** empirical study is limited to a study using synthetic data, and a small study on the yearbook **A1:** We understand the reviewer's concern regarding the distribution-shift generating process and the scaling of the experiments to large data. However, we would note that the main contribution of this work is to propose the **first theoretically grounded solution** for the continuous distribution shift problem. Our experiments are designed to verify the effectiveness of the method, particularly its ability to reuse historical information properly. Our experimental design follows the **same** settings used in existing works on continuous label shift [1,2], where $\mathcal{D}_t$ is taken as a mixture of two distributions with different shift patterns. We also follow the convention to conduct experiments in three parts: **(1) synthetic data**, to provide controlled illustrations; **(2) four benchmark datasets**, to enable comparative evaluation (Regrettably, the results are reported in Tables 1 and 2 in Appendix A due to limited space); and **(3) real-world scenarios**, to demonstrate real-life applicability. While we acknowledge that more extensive real-world validation would be ideal, we believe that the experiments conducted have already shown the effectiveness of our method and supported the theoretical findings. In the future, we will find more real-world scenarios satisfying covariate shift assumptions and explore further results. [1] Online adaptation to label distribution shift. NeurIPS'21 [2] Adapting to online label shift with provable guarantees. 
NeurIPS'22 --- **Q2:** hyperparameter settings for the various methods **A2:** For each individual algorithm, we use **the same rule in all experiments** to select the parameters. - For the contenders (DANN, KMM, KLIEP, and uLSIF), the parameters are set to the default settings as detailed in the corresponding ADAPT python package. - As for our algorithm (Accous), we provide detailed parameter configurations in lines 670-688, Appendix A.2. Specifically, we need to specify the parameters $R$ and $S$ for the logistic regression. Here, $R$ is set as the maximum feature norm in the offline data. Meanwhile, $S$ is the maximum norm of density functions; we set it as $S = d/2$, where $d$ is the dimension of the feature. OLRE uses the same parameter configuration as Accous. In the revision, we will include the parameter setup in the main text for clearer accessibility. Thanks! --- **Q3:** Figures 2-5 are extremely small **A3:** Thanks for the comment. We will enlarge the figures to enhance readability in the revision. --- **Q4:** whether the intervals chosen for the ensemble members match up with the period of change **A4:** In all performance comparison experiments, both on synthetic (Figure 2&4) and benchmark data (Table 1&2), the distribution change period **does not align with** the intervals of the ensemble members. These intervals are determined by the theoretical guidance, i.e., $|I_k| = 2^k$. By contrast, the period of distribution change is set as $M = \sqrt{T} = 100$ for $T =10000$, as mentioned in lines 612-614, Appendix A. **Figure A** in the PDF reports the additional results for the weight assignment experiment (Figure 3 in the paper) in the case where the ensemble member interval does not match the shift period $M$. The results show our method can still assign larger weights to the "right" ensemble members whose interval lengths are close to $M$. --- **Q5:** How sensitive is the method to the intervals in the ensemble members? 
**A5:** We conducted additional experiments to investigate the sensitivity to the ensemble members' interval lengths, increasing them by factors of 3 and 5, i.e., $\vert I_k\vert = 3^k$ and $\vert I_k\vert = 5^k$. The results, presented in **Table A** of the PDF, show that our algorithm is generally robust to variations in the interval length. --- **Q6:** How crucial is the ensemble to the method overall? **A6:** The ensemble structure is the core component to ensure our method's adaptivity. Intuitively, one of the main challenges of the continuous covariate shift is the unknown shift intensity $V_T$. To address the problem, we maintain multiple ensemble members to account for possible shift intensities of the environments and employ a meta-algorithm to combine them. The intuition is **supported by our theoretical analysis**, as shown by Lemma 10 in Appendix D.3: the ensemble structure enables us to track the best ensemble members on each interval, which finally leads to the dynamic regret bound. **From an empirical standpoint,** Figure 3 further shows the importance of our ensemble structure, illustrating how it helps to selectively reuse the right amount of historical information according to the (unknown) shift intensity. --- **Q7:** ...the bound does not take account for the presence of the ensemble? **A7:** Our bound explicitly accounts for the presence of the ensemble structure, encompassing both the meta-algorithm and the ensemble members with interval schedule $|I_k| = 2^k$. As discussed in Q6, the ensemble structure is the core component to achieve our theoretical guarantees (Theorem 2&3). --- **Q8:** plot bound for one of the synthetic examples to show how tight it is **A8:** Thank you for the comment. We have provided a detailed discussion about the tightness of our results in lines 311-321, with further details in Appendix D.6. 
From a theoretical view, we show that the rate can hardly be improved, even if one can receive labels of the test stream after prediction. We further conducted experiments to show the consistency between our theory and empirical results. Please refer to **Figure B** in the PDF for more details. Thanks!
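The geometric interval schedule and meta-combination discussed in A4-A7 can be sketched as follows. This is a simplified illustration of the $|I_k| = 2^k$ covering together with a generic exponential-weights (Hedge) combiner, not the exact Accous procedure:

```python
import math

def interval_schedule(horizon):
    """Geometric covering of rounds 1..horizon: the scale-k members restart
    every |I_k| = 2^k rounds, for every k with 2^k <= horizon.  Returns one
    list of (start, end) intervals per scale."""
    k, schedule = 0, []
    while 2 ** k <= horizon:
        schedule.append([(s, min(s + 2 ** k - 1, horizon))
                         for s in range(1, horizon + 1, 2 ** k)])
        k += 1
    return schedule

def hedge_weights(cumulative_losses, lr=1.0):
    """Exponentially weighted meta-combination: members with smaller
    cumulative loss on the observed stream receive larger weight."""
    exps = [math.exp(-lr * L) for L in cumulative_losses]
    z = sum(exps)
    return [e / z for e in exps]
```

For `horizon = 8` this yields four scales (intervals of length 1, 2, 4, and 8); the meta-algorithm then upweights whichever scale best matches the unknown shift intensity, which is the mechanism behind the "right" members receiving large weights in Figure 3.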
Summary: This work studies the continuous covariate shift problem, where there exists an initial labelled dataset and in every subsequent round, a new unlabelled dataset is revealed. One needs to adapt the model for every round to achieve good performance. The paper uses importance-weighted ERM where the weights are estimated based on Bregman divergence minimization (Sec.3.3) and online ensemble (Sec.4.1). Theoretical analysis shows that the estimation has a linear dynamic regret (Theorem 2) and corresponding IWERM has an average excess risk that depends on problem instances (Theorem 3). Empirical studies on synthetic and real-world data show that the proposed method can outperform several alternatives. Strengths: - Clear writing and easy-to-follow presentation - Strong theoretical guarantees on both the ratio function (Theorem 2) and the learned model (Theorem 3) - Convincing empirical demonstrations on both synthetic and real-world datasets Weaknesses: - The algorithm and analysis rely on the assumption of using linear prediction models (see, e.g., Eq.(2) where the prediction is linear in w). Linear models are hardly sufficient in many cases, and more powerful models are needed. - The work of Baby et al (2023), albeit very recent, should be discussed as it achieves a similar bound to the current work in a different setting. - Baby, D., Garg, S., Yen, T.C., Balakrishnan, S., Lipton, Z.C. and Wang, Y.X., 2023. Online Label Shift: Optimal Dynamic Regret meets Practical Algorithms. *arXiv preprint arXiv:2305.19570*. - A somewhat minor point to mention is that in the experiment, the description of the Yearbook dataset is not very clear. The appendix only mentions 10 images per round, but it remains unclear how the images are sequentially sampled. 
Some minor comments - R_t should be defined explicitly after Eq.(1) to avoid confusion (since it is the population version, different from (2), the empirical version) - L173: Ref [37] doesn’t mention UKL and ref [25] is KLIEP instead of KLLEP - L262: latter -> later - L305: minimiers -> minimizers - L336: KEIEP -> KLIEP Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Q1: Are there any possible ways to extend the current work beyond linear models? Q2: How are the images being sampled for the Yearbook dataset? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The assumption of using linear models should be explicitly mentioned. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Many thanks for your kind appreciation and for bringing the concurrent work to our attention! In the following, we will address your questions. We will further improve the paper according to your suggestions. --- **Q1:** Are there any possible ways to extend the current work beyond linear models? **A1**: We believe it is quite feasible to extend the current work beyond linear models. Our algorithm consists of two integral components: IWERM for predictor training and an online ensemble for density ratio estimation. - **For the predictor training part,** since this is an ERM-based method, we can extend our approach to learn within a more complex hypothesis space beyond the linear class, as long as Equation (2) can be minimized. - **For the density ratio estimation part,** if theoretical guarantees are not a primary concern, our ensemble algorithm can certainly be extended to learn with more intricate models, such as deep neural networks. When considering theoretical guarantees, the extension is not as straightforward, given that our method is based on the online convex optimization framework. However, we believe there are many viable opportunities. For instance, Example 1 demonstrates that our model can already be implemented using a generalized linear model. Besides, it is feasible to extend our online ensemble framework to learn within a Reproducing Kernel Hilbert Space (RKHS) by leveraging recent advances in online kernel learning [Calandriello et al., 2017; Zhang et al., 2019]. Furthermore, for more complex models like DNNs, although analyzing nonlinear neural networks is always challenging, it could be an intriguing direction to incorporate neural tangent kernel theory into the online learning process. Thank you for raising this point. We will add more discussion in the next version! Reference: [1] Daniele Calandriello, Alessandro Lazaric, and Michal Valko. Efficient second-order online kernel learning with adaptive embedding. 
In NeurIPS 2017. [2] Xiao Zhang and Shizhong Liao. Incremental randomized sketching for online kernel learning. In ICML 2019. --- **Q2**: about the online label shift paper from Baby et al., 2023 **A2:** Thank you for bringing the concurrent paper on the online label shift to us. We will add a discussion to the paper in the next version. --- **Q3:** How are the images being sampled for the Yearbook dataset? **A3:** We generate the unlabeled data stream from the Yearbook dataset based on the inherent timestamps. Within the Yearbook dataset, each image is associated with a specific year. We meticulously reorganized the data to align with the chronological sequence of image years, introducing controlled randomness among images from the same year. Based on this process, we can generate coherent, sequential data that faithfully represents the passage of time with natural distribution shifts. We will provide a more detailed description of the yearbook dataset in the revision. Many thanks! --- Rebuttal Comment 1.1: Comment: Thanks for the explanations. I would suggest adding the discussion about expansions beyond linear models to the paper to inspire future research. --- Reply to Comment 1.1.1: Comment: Thank you for the constructive comment! We will add a discussion about the expansions beyond linear models in the next version.
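As a concrete illustration of the IWERM component discussed in A1, here is a minimal sketch of importance-weighted least squares with a linear model on a toy covariate-shift problem. All names and the closed-form solver are illustrative assumptions, not the authors' implementation; in the paper the ratios would come from the online density ratio estimator rather than being computed in closed form from known densities.

```python
import numpy as np

def iwerm_linear(X, y, ratio, reg=1e-6):
    """Importance-weighted least squares: minimize sum_i r(x_i) * (x_i @ w - y_i)^2.

    `ratio` holds (estimated) test/train density ratios; here they are treated
    as given inputs to the weighted ERM step.
    """
    W = np.diag(ratio)
    A = X.T @ W @ X + reg * np.eye(X.shape[1])
    b = X.T @ W @ y
    return np.linalg.solve(A, b)

# Toy covariate shift: train inputs ~ N(0,1), test inputs ~ N(1,1), y = 2x + noise
rng = np.random.default_rng(0)
x_tr = rng.normal(0.0, 1.0, 200)
y_tr = 2.0 * x_tr + 0.1 * rng.normal(size=200)
# True density ratio N(1,1)/N(0,1) evaluated at the training inputs
r = np.exp(-0.5 * (x_tr - 1.0) ** 2) / np.exp(-0.5 * x_tr ** 2)
w = iwerm_linear(x_tr[:, None], y_tr, r)
```

Since the true relation is linear, the weighted fit recovers a slope close to 2 while emphasizing the region where the test distribution has mass.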
Summary: This paper focuses on deriving theoretical bounds for online density ratio estimation under continuous covariate shift. The formulation is based on importance-weighted empirical risk minimization, which is a conventional one for covariate shift adaptation. The paper chooses Bregman Divergence Density Ratio Matching as the method and tries to bound the regret of the estimated ratio with respect to a dynamically changing optimal density ratio. The results are first established for a general convex function class and then instantiated to a logistic regression model. The online ensemble method proposed mimics the previous continuous label shift work. Experiments are conducted on four different synthetic shift patterns and MNIST/CIFAR datasets. Strengths: Pro: The paper studies an important problem, is very clearly written, and has solid results. Weaknesses: Con: I am not sure how much the first part of the analysis adds to our knowledge about the continuous covariate shift. This is maybe my bias. To my understanding, if we are in this kind of online learning scenario (shift is changing continuously), the minimization of the cumulative dynamic regret is a very straightforward choice. The question is how to minimize it and whether the theory guides the algorithm design. So, bounding the empirical estimation error using the approximated cumulated regret (Theorem 1) is not very informative to me. Also, the paper shows that many density estimator functions satisfy the assumption and can achieve the bounds. So the main novel part of the proposed methodology seems to be the twist to the FLH algorithm? Theoretically, even though it is nice to see a general and relatively standard algorithm can achieve the minimax optimal guarantee for online density ratio estimation under continuous covariate shift, the theoretical contribution seems to be a bit limited: the high probability result (instead of the expected), and the greatly simplified analysis. 
The experimental results can be presented in a better way. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Questions: It would be nice to see different shift change patterns and how they affect the learning results. But it seems the errors are averaged before showing? Figure 2 only shows different methods but covers 4 kinds of shifts? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: There can be more discussions about limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the detailed comments. In the following, we will first highlight the contribution of our work (Q1 and Q2) and then address your concern about the experiments (Q3). We will improve our paper according to your comments. --- **Q1:** “I am not sure how much the first part of the analysis adds to our knowledge about the continuous covariate shift…” **A1:** While the conversion from density ratio estimation to regret minimization might appear intuitive (for those who are very familiar with both IW-ERM and density ratio estimation), we believe the first part of our analysis is *still novel and interesting to the community*, given that the study presents the **first theoretically grounded framework** for the continuous covariate shift problem. We emphasize the contribution of our framework through the following two points: - **Novel framework:** It is noteworthy that existing approaches (e.g., [18], the closest work to ours, proposed for continuous label shift) *cannot be applied to the continuous covariate shift problem*. As discussed in lines 63-67, previous work typically follows the framework of first constructing an unbiased risk estimator $\hat{R}_t(\mathbf{w})$ and then conducting online optimization over $\hat{R}_t(\mathbf{w})$ to learn the predictor. However, the construction of $\hat{R}_t(\mathbf{w})$ relies on the unbiasedness of the importance weight function such that $\mathbb{E}[\hat{r}_t] = r_t$, which is hard to satisfy in the continuous covariate shift problem (e.g., $\mathbb{E}[\hat{r}_t] \neq r_t$ in Example 1). The first part of our analysis provides a novel framework to handle continuous covariate shift. - **Flexibility:** Our framework is very flexible and holds the potential to serve as a principled way to manage more general continuous distribution shift scenarios. 
For instance, applying our framework to the continuous label shift problem immediately yields a new approach that *does not require the construction of an unbiased estimator*. Besides, with the disentanglement of predictor training and the IW estimation process, our framework supports the use of more complex hypotheses to train the predictor $\mathbf{w}$, while the model used in [18] is limited to a linear model. We thank the reviewer for the comments. In the next version, we will further emphasize the novelty and flexibility of our framework for the continuous covariate shift problem. --- **Q2:** the theoretical contribution seems to be a bit limited: the high probability result (instead of the expected), and the greatly simplified analysis. **A2:** We respectfully disagree with the comments and would like to take this opportunity to emphasize our contributions in the regret analysis part. - First of all, we would like to highlight that the performance measure we investigate serves as an intermediate case between worst-case dynamic regret and universal dynamic regret, as detailed in lines 750-765, Appendix B.1. This intermediate regret receives **less attention in the standard OCO literature**, but is very **important for the continuous distribution shift problem.** Using the previous result on universal dynamic regret [19] in a black-box way can only imply an expected bound. In contrast, we provide a high-probability bound with a greatly simplified analysis (without involving KKT conditions) for this less-explored yet practically significant intermediate setting. - Furthermore, our high-probability bound is obtained **non-trivially**. As outlined in lines 766-783, Appendix B.1, we strategically modify the Hedge algorithm into Adapt-ML-Prod. This alteration enables us to effectively control the generalization gap between $\hat{L}_t$ and $\tilde{L}_t$ by leveraging the negative term introduced by the exp-concavity of the loss function. 
Additionally, while the primary focus of this paper is on the continuous covariate shift problem, our simplified analysis is also applicable for general OCO purposes when the minimizers lie in the interior of the decision set. This approach can be of independent interest for other online learning problems. Due to page limits, we have to place much of the content in the appendix (especially the dynamic regret discussion in Appendix B.1). However, we greatly value your comments and will carefully revise the paper, making sure to emphasize those points more clearly in the main text. --- **Q3**: It would be nice to see different shift change patterns and how they affect the learning results. But it seems the errors are averaged before showing? Figure 2 only shows different methods but covers 4 kinds of shifts? **A3**: We have detailed the performance of all contenders across four different types of distribution shifts in Tables 1, 2, and 3. The results show that our algorithm can adapt to different kinds of shifts. Regrettably, due to space limitations, we have to place these results in Appendix A.1 for this version. In the next version, we will include some of these empirical findings within the main text. We believe this will be quite feasible, especially considering that an additional page is permitted in the camera-ready version. Thank you!
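The meta-algorithm referenced in A2 builds on exponential-weights updates over an ensemble of base learners. As a hedged illustration (plain Hedge only, not the authors' Adapt-ML-Prod modification; the learning rate and losses below are made up), the core update is:

```python
import numpy as np

def hedge_update(weights, losses, lr):
    """One step of the Hedge (exponential weights) meta-algorithm:
    down-weight each expert in proportion to exp(-lr * loss), then renormalize."""
    w = weights * np.exp(-lr * np.asarray(losses))
    return w / w.sum()

# Three experts; expert 1 consistently suffers the smallest loss,
# so its weight should grow toward 1 over the rounds.
w = np.ones(3) / 3
for _ in range(50):
    w = hedge_update(w, [0.9, 0.1, 0.8], lr=0.5)
```

In the paper's setting, each "expert" is a density ratio learner running on its own interval, and the meta-weights track whichever interval length best matches the current shift.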
Rebuttal 1: Rebuttal: We sincerely appreciate the insightful comments and the positive feedback from all reviewers for this paper. In the rebuttal period, we conducted additional experiments to further support our claim (particularly to address the concerns from Reviewer Ls7D), as presented in the attached PDF file. The experimental setup and results are listed as follows. To provide better readability of the results, we have also included the text in the PDF files. Thanks! --- **Experiment Setup for Figure A.** In response to Q4 (Reviewer Ls7D), we conduct weight assignment experiments in the case where the interval lengths of ensemble members **do not match** the period of distribution shift. The experimental setup and performance measure are the same as those in Figure 3, except that the distribution shift period is changed to $M = 10, 50, 100, 200, 400, 800$. **Experiment Result for Figure A.** Figure A shows that our meta-algorithm can still assign larger weights to the "right'' ensemble members whose interval length is close to the distribution shift period ($M$) even if they are not exactly matched. --- **Experiment Setup for Table A.** In response to Q5 (Reviewer Ls7D), we provide the performance comparison on the synthetic dataset. The experimental setup and measure are the same as those in Figure 1, except that we additionally report the performance of Accous when the interval lengths of the ensemble members are increased by factors of 3 and 5, i.e., $\vert I_k\vert = 3^k$ and $\vert I_k\vert = 5^k$, respectively. **Experiment Result for Table A.** Table A shows the averaged error over $T=10000$ iterations. The results show that our method is generally robust to the interval lengths of ensemble members. --- **Experiment Setup for Figure B.** In response to Q8 (Reviewer Ls7D), we empirically plot the order of the averaged excess risk suffered by our algorithm and compare it with the result established in Theorem 3. 
We conduct experiments on the synthetic dataset with the $\texttt{Lin}$ shift ($V_T = \Theta(1)$). To empirically measure the order of the averaged excess risk, we introduce the **"risk ratio''** $\rho_t(p)$, defined as $\rho_t(p) = \mathfrak{R}_t\times t^{p}$ for each time $t$, where $p\in(0,1)$ is a constant, and $\mathfrak{R}_t$ is the averaged excess risk at $t$. If the risk ratio of an algorithm decreases over the horizon, it means the convergence rate of the risk is at least $O(T^{-p})$. Otherwise, the risk order is slower than $\Theta(T^{-p})$. **Experiment Result for Figure B.** Figure B shows the risk ratio of our algorithm. As suggested by Theorem 3, the excess risk of our algorithm should converge at least at the rate of $O(T^{-1/3})$ for the $\texttt{Lin}$ shift. The green line shows that the risk ratio $\rho_t(1/3) = \mathfrak{R}_t \times t^{1/3}$ decreases in the $\texttt{Lin}$ shift scenario, indicating that the algorithm empirically attains a convergence rate faster than $O(T^{-1/3})$, which is consistent with our theory. We further note that the risk ratio $\rho_t({7}/{12})$ (red line) also slightly decreases along the horizon, indicating that the algorithm converges slightly faster than $O(T^{-7/12})$ empirically, thus quicker than $O(T^{-1/3})$. However, this result does not imply that our bound is loose. Theorem 3 essentially provides a **worst-case guarantee**: our algorithm is guaranteed to attain the $O(T^{-1/3})$ rate in any pattern of distribution shift when $V_T = \Theta(1)$. But it is also possible to achieve a better convergence rate in benign environments. In the online learning literature, a bound that can adapt to benign environments is known as a problem-dependent bound, which requires more advanced techniques to attain. In this paper, we primarily consider the worst-case guarantee and will address how to achieve the problem-dependent bound in future work. Pdf: /pdf/ae800fe1cfb58362ad1984ca3f39fb52b51e3d83.pdf
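The risk-ratio diagnostic defined above is easy to reproduce. A minimal sketch, with a synthetic excess-risk curve standing in for the measured $\mathfrak{R}_t$ (purely illustrative, not the paper's data):

```python
import numpy as np

def risk_ratio(avg_excess_risk, p):
    """rho_t(p) = R_t * t^p; a decreasing curve indicates a rate of at least O(T^-p)."""
    t = np.arange(1, len(avg_excess_risk) + 1)
    return avg_excess_risk * t ** p

# Synthetic excess risk decaying at rate t^{-1/2} (illustrative only)
t = np.arange(1, 1001)
R = t ** -0.5
rho_third = risk_ratio(R, 1 / 3)       # t^{-1/6}: decreasing -> rate at least O(T^{-1/3})
rho_two_thirds = risk_ratio(R, 2 / 3)  # t^{1/6}:  increasing -> rate slower than O(T^{-2/3})
```

Plotting such ratio curves for several values of p brackets the empirical convergence rate, which is exactly how Figure B distinguishes the 1/3 and 7/12 exponents.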
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Mixed Samples as Probes for Unsupervised Model Selection in Domain Adaptation
Accept (poster)
Summary: This paper proposes a new framework to validate the domain-adapted model through mixed samples. Leveraging mixed samples is not new, but its application to domain-adapted model validation is novel. The framework is very general, so it can be combined with any adaptation method. Experiments are very extensive, covering various domain adaptation methods and different validation methods including SND and Corr-C, and the authors provide as much supporting evidence as possible. In many cases, the proposed framework shows superior performance. Strengths: In the domain adaptation scenario, the target labels are generally not available. The performance of the adapted model is typically not very robust, so we might want to try the adaptation several times, but we do not know which model is better. So a good model validation framework is necessary. Therefore, the problem is well-defined and well-motivated in practice. The main strengths of this paper are 1) simplicity, 2) generalizability, and 3) superior performance. In addition, the claimed arguments are well-supported by extensive experiments, including segmentation and backbone change (L301-304). Weaknesses: The main weakness of this paper is (as the authors also mentioned) the lack of theoretical analysis. This paper would add more value if the authors provided meaningful analysis to support the framework. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. L194-195: How can we calculate the ICE score as an accuracy between $y^{i}$ and $\hat{y}^{i}=(...,1/2,...,1/2,...)$ (for example), given that $\hat{y}$ is a one-hot pseudo-label according to L189? 2. L244: Any more justification on why $\lambda=0.55$? How does the performance vary depending on the choice of $\lambda$? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: I'm on the positive side, but without looking at the codes and theoretical justifications, the reproducibility cannot be fully verified. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are sincerely grateful for both your recognition of our contributions and your valuable and comprehensive feedback. We've taken each of your concerns into account and have provided detailed responses below. > **Q1**: The main weakness of this paper is (as the authors also mentioned) the lack of theoretical analysis. This paper would add more value if the authors provided meaningful analysis to support the framework. **A1**: Thank you for your constructive suggestion. We acknowledge rigorous theoretical analysis of our method could shed more light on the challenge of model selection in domain adaptation. We recognize this as an avenue for future work. Although our current submission lacks extensive theoretical analysis, we have diligently compensated by offering an in-depth empirical analysis of our validation method, MixVal. This analysis is further elaborated in Appendix B and is accompanied by new comparisons, including **a comparison with the combination of Entropy [44] and SND [27], as depicted in Table 1 of the attached PDF in the global rebuttal.** > **Q2**: L194-195: How can we calculate ICE score as an accuracy between $y^i$ and $y^i = (...,1/2,...,1/2,...) $ (for example) because $\hat{y}$ is one-hot pseudo-label according to L189? **A2**: The notation $\hat{y}$ denotes the pseudo label predicted by the fixed UDA model. In particular, $\hat{y}_t^i$ signifies the pseudo label for target sample $x_t^i$. During mixup, when we combine target samples $x_t^i$ and $x_t^j$ to create a mixed sample, we utilize the hard one-hot labels from $\hat{y}_t^i$ and $\hat{y}_t^j$ along with a mix ratio exceeding 0.5. This choice ensures that the interpolated label of the mixed sample **avoids the scenario of (...,1/2,...,1/2,...)**. When computing the ICE score for this mixed sample, we predict its hard pseudo label using the fixed UDA model and compare it against the interpolated label. 
This comparison determines whether the mixed sample is accurately or inaccurately predicted. > **Q3**: L244: Any more justification on why $\lambda=0.55$? How's the performance variation depending on the choice of $\lambda$? **A3**: The mix ratio $\lambda$ in mixup controls the level of ambiguity of mixed samples. A common heuristic is that a value close to 0.5 for $\lambda$ promotes more ambiguous in-between samples, potentially possessing stronger discriminatory capabilities. In contrast, a value approaching 1.0 generates simpler samples with lower discriminatory potential. This principle is supported by the results from Figure 3, where performance at $\lambda=0.9$ is notably suboptimal. Hence, we set $\lambda=0.55$ for all our experiments to ensure stable validation performance. > **Q4**: without looking at the codes and theoretical justifications, the reproducibility cannot be fully verified **A4**: Our MixVal approach is straightforward, requiring no extra model re-training or extensive hyperparameter tuning. We've included detailed PyTorch-style pseudocode in Appendix A, covering every algorithm detail and step. Furthermore, we'll release our full code, including model training and evaluation, to ensure robust reproducibility of our results. --- Rebuttal Comment 1.1: Comment: I appreciate the authors taking the time to answer all questions during the rebuttal. My questions and concerns are adequately addressed. I keep my score as-is. --- Reply to Comment 1.1.1: Title: Thanks for your support Comment: Thank you for your exceptionally prompt feedback and unwavering support of our paper. We are glad to observe that our responses have effectively resolved your concerns. We are committed to integrating the suggestions from all reviews into our revised manuscript and ensuring the reproducibility of our work through the release of our code. Your invaluable input has undeniably enhanced the quality of our paper, and we sincerely thank you for your dedication and time.
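A minimal sketch of the mixed-sample probing described in A2 and A3 (hypothetical code: `model` stands in for the fixed UDA classifier's hard-label predictions, and MixVal's separate intra-/inter-cluster pairing is collapsed into random pairing for brevity):

```python
import numpy as np

def ice_score(model, X_t, lam=0.55, rng=None):
    """Mix pairs of target samples with ratio lam > 0.5; the interpolated hard
    label is then the pseudo label of the dominant sample, avoiding the
    (..., 1/2, ..., 1/2, ...) ambiguity. ICE = accuracy of the fixed model's
    predictions on the mixed samples against these interpolated labels."""
    rng = np.random.default_rng(rng)
    pseudo = model(X_t)                       # hard pseudo labels of raw targets
    idx = rng.permutation(len(X_t))           # random pairing partner for each sample
    X_mix = lam * X_t + (1 - lam) * X_t[idx]  # lam > 0.5, so the label follows X_t
    return np.mean(model(X_mix) == pseudo)

# Toy check with a linear two-class "model" on 1-D inputs
model = lambda X: (X[:, 0] > 0).astype(int)
X = np.concatenate([np.full((50, 1), -2.0), np.full((50, 1), 2.0)])
score = ice_score(model, X, rng=0)
```

On this well-separated toy problem every mixed sample keeps the dominant sample's class, so the score is maximal; for a real adapted model the score degrades as the learned target structure becomes less consistent, which is what makes it usable for model selection.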
Summary: Validating hyperparameters in UDA is challenging due to the unlabeled target data. For it, the paper proposes MixVal, a novel target-only approach that utilizes mixup to synthesize target samples for validation. MixVal combines inductive biases from prior approaches through intra-cluster and inter-cluster mixup, achieving state-of-the-art performance across 11 UDA methods and 4 adaptation settings. Strengths: 1. Validation is crucial in UDA to utilize it in real-world problems, and this paper proposes an effective method to tackle it. 2. Experimentally, it is clearly robust and effective to model selection across multiple benchmarks. Weaknesses: 1. I can't find methodological novelty. I think that the proposed method is a combination of Entropy, SND, and Mixup. 2. There are several papers utilizing Mixup in UDA. It requires not simply mentioning those papers, but rather providing a detailed summary and the differences from them. (e.g., CoWA-JMDS [1]) 3. Since Mixup is used for intra-cluster and inter-cluster augmentation, it is possible to utilize other famous Mixup variants like Manifold Mixup [2] or CutMix [3]. However, there are no relevant experiments conducted in this regard. [1] Lee, Jonghyun, et al. "Confidence score for source-free unsupervised domain adaptation." International Conference on Machine Learning. PMLR, 2022. [2] Verma, Vikas, et al. "Manifold mixup: Better representations by interpolating hidden states." International conference on machine learning. PMLR, 2019. [3] Yun, Sangdoo, et al. "Cutmix: Regularization strategy to train strong classifiers with localizable features." Proceedings of the IEEE/CVF international conference on computer vision. 2019. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please provide an in-depth response regarding the weaknesses mentioned above. Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: This paper addresses the important research topic, model/hyperparameter validation in UDA. I think that this work can be meaningful to the UDA community. However, it is unclear whether the proposed method is the most optimal approach for validation and even as a work of using Mixup. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks a lot for the constructive comments. We have addressed all of your concerns in a detailed manner as outlined below. > **Q1**: I can't find methodological novelty. I think that the proposed method is a combination of Entropy, SND, and Mixup. **A1**: We **respectfully disagree with this assertion**. Our MixVal methodology stands apart from both Entropy and SND. Unlike Entropy and SND, which directly measure overall raw predictions across all target samples, MixVal takes **a distinct approach by using mixed samples to actively probe the learned target structure**. Furthermore, MixVal introduces **a novel analysis of high-level inductive bias** that differentiates it from existing works in UDA model selection. Regarding the use of mixup, we acknowledge that it's a fundamental data augmentation technique widely applied in various tasks, as detailed in Section 3.2. **The inclusion of mixup does not necessarily undermine novelty, especially considering its extensive influence—cited over 7200 times.** We emphasize that **the standard mixup is typically applied during the training stage with a mix ratio close to 1.0** ($\lambda \in \text{Beta}(\alpha, \alpha), \alpha \in [0.1, 0.4]$) for regularization effect, and tends to suffer from **performance degradation when the mix ratio is near 0.5 due to manifold intrusion [A, B]**. In contrast, our **MixVal harnesses mixup during the inference stage to explore the learned target structure**. This is executed with a fixed mix ratio close to 0.5. **These differences significantly set MixVal apart from other techniques including Entropy, SND, and mixup.** **From the perspective of experimental performance**, we provide a direct comparison **between Entropy+SND and our MixVal in Table 1 of the attached PDF in the global rebuttal**. 
Our findings reveal that **the combination of Entropy and SND fails to** demonstrate superior performance over either individual method, while consistently, **MixVal significantly outperforms Entropy+SND** across various tasks. > **Q2**: There are several papers utilizing Mixup in UDA. It requires not simply mentioning those papers, but rather providing a detailed summary and the differences from them. (e.g., CoWA-JMDS [1]) **A2**: We appreciate your reference to CoWA-JMDS [1]; we will certainly integrate it into our discussion on related research. It's important to emphasize that **our submission markedly differs from this specific paper**. While the mentioned work contributes to enhancing model performance within UDA, our focus centers on the often-overlooked yet significant model selection challenge in UDA. We've already included references to numerous UDA and semi-supervised learning papers that employ Mixup in L132. Additionally, we have **conducted comprehensive experiments employing MixVal for model selection with the UDA method DMRL [32]**, providing validation results in Table 8 and thorough analysis in L286-L289. This experiment demonstrates that our MixVal approach remains effective for UDA methods utilizing mixup for model training. > **Q3**: Since Mixup is used for intra-cluster and inter-cluster augmentation, it is possible to utilize other famous Mixup variants like Manifold Mixup [2] or CutMix [3]. However, there are no relevant experiments conducted in this regard. **A3**: We appreciate your insights. To clarify, **we've already conducted experiments wherein we substituted mixup with Manifold Mixup and CutMix**. We provide a comparison of the **results in Figure 2 (a) and detailed analysis in L274-L278**. Notably, our findings indicate that image-level mixup outperforms other consistency regularizations including Manifold Mixup and CutMix. 
[A] MixUp as Locally Linear Out-Of-Manifold Regularization, AAAI 2019 [B] On Mixup Training: Improved Calibration and Predictive Uncertainty for Deep Neural Networks, NeurIPS 2019 --- Rebuttal Comment 1.1: Comment: Thanks to the authors for the feedback. In terms of novelty and contribution, the proposed concept still appears to be a synthesis of Entropy, SND, and Mixup. While I am not averse to favorable assessments, my evaluation will remain the same. --- Reply to Comment 1.1.1: Title: Addressing your remaining concern on novelty Comment: Thanks for the feedback. Regarding your remaining concern "In terms of novelty and contribution, the proposed concept still appears to be a synthesis of Entropy, SND, and Mixup," we aim to address this comprehensively. --- --- ### **Our contributions** - Our paper addresses the **fundamental problem** of unsupervised model selection in domain adaptation, which remains **underexplored and open**. - We are **the first** to tackle this challenge by **directly investigating the structure** learned for the unlabeled target domain. Specifically, we introduce a **novel target-only** model selection method named **MixVal**, which employs mixup for **both inter-cluster and intra-cluster probing**. - In comparison to existing studies like SND [27] and Entropy [44], we offer **much more extensive experimental results**. These results **demonstrate** that our method, **MixVal**, stands out as **the only stable approach** consistently achieving **state-of-the-art** performance across a variety of tasks. We appreciate the **reviewer's acknowledgment of the two contributions (1 & 3)** made by our paper and note that there remains a concern solely regarding the second contribution, regarding our method MixVal. ___ ___ ### **Our novelties** #### **Regarding mixup,** it's important to emphasize that mixup just serves as a technique for generating probing samples within our model selection method, MixVal. 
However, **the novelty of MixVal extends far beyond the utilization of mixup**. Still, the **clear differences** between **how we use mixup and the training-stage mixup [30]** are explained as follows: - **Different purpose and stage**: In **mixup [30]** and its subsequent applications [31, 32, 33], mixup functions as **a regularization technique** applied during the **training stage** to train a model. Conversely, in **our paper**, mixup operates as **a probing sample synthesis technique** and is integrated into MixVal during the **inference stage** to evaluate a model. - **Different mixup strategy**: Many existing applications of mixup **[30, 31, 32, 33] primarily use inter-cluster mixup**, focusing on mixing samples of different classes. In contrast, driven by our novel motivation of probing, **we** are the first to **explicitly** consider and **differentiate** between **inter-cluster** mixup and **intra-cluster** mixup for model selection. - **Different mix ratio $\lambda$**: For all existing training-stage mixup applications **[30, 31, 32, 33]**, the mixup operation is **only effective** with a mix ratio **$\lambda$ near 1**, while **ineffective** with a mix ratio **$\lambda$ near 0.5** due to the **manifold intrusion problem [A, B] in our rebuttal**. In contrast, our **MixVal** method demonstrates, through Figure 3 experiments, that a mix ratio **$\lambda$ near 1 is ineffective**, whereas a mix ratio **$\lambda$ near 0.5** yields **better** results. --- #### **Regarding SND [27] and Entropy [44],** we demonstrate the novelty of our MixVal over these competing model selection methods as follows. - **Novel analysis**: As shown in **Table 1**, we introduce a fresh and comprehensive analysis of the **inductive bias used in** existing target-only validation methods **[26, 27, 44, 45]**. This brings forth a **new insightful understanding** of these methods within the domain adaptation **community**. 
- **Novel methodology**: We are **the first** to solve the model selection problem in UDA from **a new probing perspective**. Our MixVal method employs two types of mixed samples to probe the trained model, inherently considering **two** advantageous inductive biases for the **first time**. In contrast, **based on our novel analysis**, both **SND and Entropy** utilize **raw target samples** to evaluate a **singular** inductive bias. - **Novel performance**: Compared with Entropy and SND, we significantly **broaden the empirical evaluation scope** to include open-partial-set DA and source-free DA for **the first time**. In addition, our **MixVal consistently surpasses both in model selection stability and performance**. As suggested by the reviewer, we contrast MixVal with the combined method **"Entropy+SND" in Table 1 of our rebuttal PDF.** The results indicate that the combination of Entropy and SND falls short of improving over either alone. In contrast, **MixVal consistently outperforms Entropy, SND, and "Entropy+SND"** across diverse tasks. This **strongly addresses the concern that "the proposed method is a combination of Entropy, SND".** We summarize the MixVal vs. "Entropy+SND" comparison as follows, with complete results available in Table 1 of our rebuttal PDF. | Method | ATDOC [25] | BNM [22] | PADA [15] | SAFN [21] | | :--- | ---: | ---: | ---: | ---: | | Entropy + SND | 62.16 | 56.26 | 61.27 | 70.52 | | MixVal (ours) | **68.26** | **66.30** | **67.57** | **71.41** | --- **Based on the compelling evidence presented above, we firmly support the novelty and contribution of our paper.**
Summary: In this paper, a novel target-only method is proposed that employs mixup to synthesize in-between target samples for validation. MixVal leverages mixed target samples to directly probe the learned target structure, benefiting from a combination of inductive biases considered in prior approaches. MixVal performs intra-cluster and inter-cluster mixup to explicitly capture both inductive biases of neighborhood consistency and low-density separation. Strengths: 1. This work proposes a novel solution named MixVal that eliminates the need for source data access and avoids cumbersome model re-training. Through inter-cluster and intra-cluster mixup, MixVal combines two essential inductive biases utilized in prior validation methods. 2. The experimental results achieve SOTA. Weaknesses: 1. The presentation needs to be improved. 2. Using Mixup in domain adaptation is a common practice. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: DANCE (NeurIPS 2020) is not the SOTA of Open-partial-set UDA. How about the performance of some later work as listed below? [1] Guangrui Li et al. Domain Consensus Clustering for Universal Domain Adaptation. CVPR 2021 [2] Kuniaki Saito et al. OVANet: One-vs-All Network for Universal Domain Adaptation. ICCV 2021 [3] Liang Chen et al. Evidential Neighborhood Contrastive Learning for Universal Domain Adaptation. AAAI 2022 [4] Liang Chen et al. Geometric Anchor Correspondence Mining with Uncertainty Modeling for Universal Domain Adaptation. CVPR 2022 Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: The authors addressed the limitations. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the positive feedback on our paper, particularly regarding its soundness and state-of-the-art experimental results. We have addressed your concerns as follows: > **Q1**: The presentation needs to be improved. **A1**: Thanks for the constructive suggestion. We will revise our paper according to comments from all reviewers to improve our presentation. > **Q2**: Using Mixup in domain adaptation is a common practice. **A2**: We acknowledge the widespread use of Mixup in various domain adaptation methods. **However, it's important to clarify that our implementation of mixup differs significantly from the conventional practice employed in most mixup-related works.** Numerous domain adaptation approaches [32, 49, 50], as well as semi-supervised learning methods [31, 33], typically apply mixup during the training stage with a mix ratio close to 1.0 ($\lambda \sim \text{Beta}(\alpha, \alpha), \alpha \in [0.1, 0.4]$) for a regularization effect, and tend to suffer from performance degradation when the mix ratio is near 0.5 due to manifold intrusion [A, B]. In contrast, our MixVal harnesses mixup during the inference stage to explore the learned target structure. This is executed with a fixed mix ratio close to 0.5. We've highlighted some representative methods employing mixup for domain adaptation [32, 49, 50] and semi-supervised learning [31, 33] in L32. Furthermore, we've conducted validation experiments, specifically with DMRL [32], presenting results in Table 8 and the corresponding analysis in L286-L289. The outcomes from the DMRL experiments demonstrate that MixVal offers resilience against attacks that involve mixup during domain adaptation model training. We'll also clarify the distinctions between these methods and our approach in our paper. > **Q3**: DANCE (NeurIPS 2020) is not the SOTA of Open-partial-set UDA. How about the performance of some later work as listed below? [1] Guangrui Li et al. 
Domain Consensus Clustering for Universal Domain Adaptation. CVPR 2021 [2] Kuniaki Saito et al. OVANet: One-vs-All Network for Universal Domain Adaptation. ICCV 2021 [3] Liang Chen et al. Evidential Neighborhood Contrastive Learning for Universal Domain Adaptation. AAAI 2022 [4] Liang Chen et al. Geometric Anchor Correspondence Mining with Uncertainty Modeling for Universal Domain Adaptation. CVPR 2022 **A3**: We greatly appreciate your valuable suggestion. We'll include all of these relevant works in the related work section on open-partial-set UDA. It's worth noting that our submission is the first to tackle the model selection problem in the context of open-partial-set UDA, showcasing the validation method's generalization ability. Our choice of DANCE as the method for open-partial-set UDA serves specific purposes. DANCE holds a notable status as a standard universal domain adaptation method and has also been adopted by SND [27] for closed-set validation experiments. However, we acknowledge that DANCE doesn't represent the state-of-the-art (SOTA) in open-partial-set UDA. **To further address this concern, we've included the results of OVANet, as recommended by the reviewer, and have presented the results in Table 2 of the attached PDF in the global rebuttal.** Specifically, we conduct hyperparameter validation for the entropy objective's loss coefficient, spanning a range of values: $\\{0.001, 0.003, 0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0\\}$. OVANet sets the default value as 0.1 by conducting supervised validation on one UDA task. From the results, we observed that MixVal consistently outperforms other target-only validation methods in terms of validation performance. This experiment further highlights the robust generalization ability of MixVal. 
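The mix-ratio distinction in A2 above can be illustrated with a small simulation (a minimal sketch; the value $\alpha = 0.2$, the sample count, and the probe ratio 0.55 are assumed for illustration and are not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Training-stage mixup draws lambda ~ Beta(alpha, alpha) with a small alpha,
# so most draws land near 0 or 1 and one sample dominates each mixture.
lam_train = rng.beta(0.2, 0.2, size=10_000)
frac_extreme = np.mean((lam_train < 0.1) | (lam_train > 0.9))

# MixVal's inference-stage probing instead fixes lambda near 0.5,
# producing genuinely in-between probing samples.
lam_probe = 0.55

print(f"fraction of training-mixup draws near 0 or 1: {frac_extreme:.2f}")
```

With a small $\alpha$ the Beta distribution is U-shaped, so well over half of the training-stage draws fall near the extremes, whereas a fixed ratio near 0.5 never does.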
[A] MixUp as Locally Linear Out-Of-Manifold Regularization, AAAI 2019 [B] On Mixup Training: Improved Calibration and Predictive Uncertainty for Deep Neural Networks, NeurIPS 2019 --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their answers to my questions. Most of my questions have been solved. I choose to keep my rating score. --- Reply to Comment 1.1.1: Title: Thanks for your support Comment: We extend our sincere gratitude to the reviewer for providing prompt and valuable feedback during the discussion phase. We are pleased to learn that our rebuttal has effectively addressed your questions. All of your constructive suggestions will be thoughtfully incorporated into the revised version of our manuscript. Your insightful review has undeniably contributed to improving the quality of our paper, and we genuinely appreciate your dedication and time.
Summary: This paper introduces a simple validation method for unsupervised domain adaptation. This method, called MixVal, proposes a new evaluation (ICE) based on mixup to select the most appropriate model candidates, which are obtained by training with different hyperparameters, such as loss coefficient/temperature/margin factor. Compared with other target-only validation methods, the paper considered three different inductive biases: neighborhood consistency/low-density separation/no prior label distribution. Strengths: This paper is well-written and easy to follow. Specifically, Table 1 and Figure 1 clearly show the motivation and pipeline, respectively. Experiments over several datasets and settings are impressive, which sufficiently shows the generalization of the proposed method. Weaknesses: **1. Lacks novelty.** Behind the mentioned contributions, there is only one proposed technical point, the proposed evaluation ICE with mixup, which just takes less than half a page. This may not be enough for a NeurIPS paper, especially as it lacks sufficient theoretical support. **2. Lacks theoretical support.** The method assumes that the model with a higher ICE score can obtain a higher real target accuracy. This paper needs theoretical support showing that there is a positive correlation between the ICE score and the discriminability of the candidate models. This assumption could not be valid in real-world systems. Ideally, the best ICE score is 100%. In this case, the “model” could be a linear system, which is certainly impossible in applications. This might demonstrate that there are some theoretical gaps between the score and the goal of UDA. The authors also recognized this limitation and tried to give some empirical observations in Figure 2(b). However, without the second technical point, this limitation seems very serious. Figure 2(b) looks a little bit tricky, where the first subfigure is shown with accuracy, and the following subfigures are in ranked ascending order. 
In fact, all subfigures should be shown with the same indicator, ranking all model candidates in ascending order. In this case, the empirical conclusion about the high correlation is not convincing. **3. Confusion about the formulation.** The formulation of the mixed pseudo label is different from its pseudocode. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: What is the relationship between RankInter and RankIntra? Is it necessary to separate these two different types during inference? In Figure 3, AccAvg has a similar trend to RankIntra. RankInter and RankIntra do not seem to complement each other. In other words, handling the inductive biases of neighborhood consistency and low-density separation at the same time does not ensure higher performance. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: Please see the weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the constructive comments. We've provided responses to each of your remaining questions below. > **Q1**: Lacks novelty. Behind the mentioned contributions, there is only one proposed technical point, the proposed evaluation ICE with mixup, which just takes less than half a page. This may not be enough for a NeurIPS paper, especially as it lacks sufficient theoretical support. **A1**: We appreciate the reviewer's recognition of **MixVal's simplicity and the novelty introduced by our proposed ICE evaluation with mixup**. However, we **respectfully differ** on the assertion that our submission **lacks novelty due to a single proposed technical point**. Firstly, we'd like to clarify that **ICE with mixup encompasses two novel technical aspects** introduced by our submission: **the validation of models via synthesized sample probing and the novel application of mixup during the inference stage** to generate in-between discriminative samples. The simplicity of ICE might overshadow these nontrivial innovations. Beyond the technical facets, our innovation extends to the **novel analysis of the high-level inductive bias** inherent in existing target-only validation methods, succinctly summarized in Table 1. Furthermore, **extensive empirical comparisons spanning diverse domain adaptation scenarios** are presented in Tables 2-7. As for the content coverage, our introduction of MixVal actually spans **more than one page, as shown in Section 3.2**. As for the assertion that a submission with less than half a page of technical content may not suffice for NeurIPS, we respectfully differ in opinion. > **Q2**: Lacks theoretical support. ... This paper needs theoretical support showing that there is a positive correlation between the ICE score and the discriminability of the candidate models. This assumption could not be valid in real-world systems. ... This might demonstrate that there are some theoretical gaps between the score and the goal of UDA. 
**A2**: Although a formal theoretical analysis is absent, our **extensive experiments and analysis strongly validate MixVal**. Across the various UDA scenarios outlined in Tables 2-7 (including VisDA, DomainNet, source-free UDA, and open-set shifts), MixVal consistently outperforms existing validation methods. This compelling empirical evidence contests the notion that "this assumption could not be valid in real-world systems." Our results affirm MixVal's effectiveness in model selection based on ICE scores, **dispelling the conjectured theoretical gaps.** > **Q3**: ... some empirical observations in Figure 2(b). ... Figure 2(b) looks a little bit tricky ... In this case, the empirical conclusion about the high correlation is not convincing. **A3**: We'd like to clarify that **Figure 2(b) is informative with comprehensive data**. While we can infer the rank from specific accuracies in the first part of Figure 2(b), reversing the process is not feasible. To enhance clarity, we've also included a ranking visualization in **Figure 1 of the PDF. Our observation remains consistent: MixVal exhibits a stronger correlation than the state-of-the-art method SND.** > **Q4**: Confusion about the formulation. The formulation of the mixed pseudo label is different from its pseudocode. **A4**: There is no inconsistency between the method and the pseudocode. **The mixup formulation is general, while the pseudocode provides the detailed steps.** > **Q5**: What is the relationship between RankInter and RankIntra? Is it necessary to separate these two different types during inference? In Figure 3, AccAvg has a similar trend to RankIntra. RankInter and RankIntra do not seem to complement each other. In other words, handling the inductive biases of neighborhood consistency and low-density separation at the same time does not ensure higher performance. **A5**: To clarify, "RankInter" signifies MixVal with inter-cluster mixup only, and "RankIntra" corresponds to MixVal with intra-cluster mixup only. 
In our MixVal implementation, we **achieve both types of probing seamlessly in a single inference pass**, as outlined in our "ice_score" function detailed in Appendix A. Regarding Figure 3, it's important to note that AccAvg is not employed in our MixVal methodology. This is because the direct combination of ICE scores from the two probing strategies is unstable, given that intra-cluster probing tends to yield higher ICE scores than inter-cluster probing. The figure illustrates that AccAvg performs less effectively than RankAvg due to the instability associated with accuracy averaging. In our submission, **we highlighted in L298 that MixVal's combination of both probing strategies bolsters the stability of model selection, substantiated by our experimental outcomes detailed in Tables 2-5. In support of this point, we've provided additional results in Table 1 of the attached PDF in the global rebuttal.** Two conclusions can be drawn from these results: (i) The two-dimensional probing strategy enhances the stability of validation compared to individual probing types. (ii) MixVal distinguishes itself from and significantly surpasses the combination of SND and Entropy. --- Rebuttal Comment 1.1: Title: Confusion about the formulation Comment: Thank the authors for the detailed response. However, I am still confused about the formulation of the mixed pseudo-label. In the main paper, the formulation behind Line 187 shows that $y_{mix}$ is a mixing value of the pseudo labels of different target samples, where $\lambda$ is the mixing weight. However, in Algorithm 1, "mix_labels" is "pl_a" if "$\lambda > 0.5$" and "pl_b" otherwise. Could the authors provide some explanation about this? 
--- Reply to Comment 1.1.1: Title: Addressing your remaining confusion Comment: **We thank the reviewer for providing a detailed description of the confusion about the formulation of the mixup (highlighted as weakness #3 in the review and addressed in Q4&A4 of our rebuttal).** We aim to address this confusion with the following elucidation: To ensure clarity in our explanation, we reiterate the pertinent formulations and explanations outlined in our submission. For reference, we reproduce the formulation from Line 187 below. This formulation is commonly employed in mixup-relevant papers to illustrate the mixup operation [30]. $x_{mix} = \lambda \, x_t^i + (1 - \lambda) \, x_t^j$ $y_{mix} = \lambda \, \widehat y_t^i + (1 - \lambda) \, \widehat y_t^j$ As explained in Lines 188-189, $\lambda$ is a scalar used for interpolation, $x_t^i$ and $x_t^j$ denote two different target image vectors, and $\widehat y_t^i$ and $\widehat y_t^j$ denote the corresponding one-hot pseudo label encodings for both images (as specified in Lines 142-143), $\widehat y \in \mathbb{R}^{K}$, where $K$ is the number of categories in the source domain. **The confusion raised by the reviewer pertains to $y_{mix}$.** We agree that "$y_{mix}$ is a mixing value of the pseudo labels of different target samples, where $\lambda$ is the mixing weight." Assuming target sample $i$ and sample $j$ belong to distinct categories $k_i$ and $k_j$ respectively, $y_{mix}$ is a soft label vector $(..., \lambda, ..., 1-\lambda, ...)$, with $\lambda$ assigned to the $k_i$-th position and $1-\lambda$ assigned to the $k_j$-th position. In the original mixup framework [30] and its subsequent applications in domain adaptation [32, 49, 50], $y_{mix}$ is directly utilized for model training. In this context, the encoded relationship between different samples contributes to regularizing the model training process. 
**In contrast, our MixVal is implemented exclusively during the inference stage.** Here, we employ $y_{mix}$ to evaluate the inference accuracy of a fixed model on mixed samples. As demonstrated in our ICE formulation (behind Line 194), for each mixed sample $i$, we **compute the accuracy between its interpolated label $y_{mix}^i$ and its predicted one-hot label $\hat{y}_{mix}^i$**, inferred by the model. **For the accuracy calculation, only the hard one-hot versions of the interpolated and predicted labels are needed**, focusing on the class with the highest probability. Consequently, for $y_{mix}^i$, we **determine its hard one-hot label by comparing $\lambda$ at the $k_i$-th position and $1-\lambda$ at the $k_j$-th position**, as the probabilities of the other categories are zero. **If $\lambda > 1-\lambda$, i.e., $\lambda > 0.5$**, the one-hot label is attributed to the $k_i$-th category. Conversely, **if $\lambda \leq 1-\lambda$, i.e., $\lambda \leq 0.5$**, the one-hot label corresponds to the $k_j$-th category. **This explains why our pseudocode in Algorithm 1 uses an "if-else statement" that compares $\lambda$ and $0.5$ to determine the hard one-hot label of the interpolated label $y_{mix}$.** In summary, we would like to emphasize that the formulation from Line 187 and the ICE formulation behind Line 194 are **completely consistent** with the "'mix_labels' is 'pl_a' if '$\lambda > 0.5$' and 'pl_b' otherwise" pseudocode in Algorithm 1. This pseudocode precisely reflects our implementation of MixVal. We sincerely hope that the above explanation effectively addresses your confusion. We welcome any further discussions and value your input.
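The hard-label logic explained in this reply can be sketched in a few lines of Python (a minimal illustration under assumed array names and shapes; this is not the authors' released `ice_score` implementation):

```python
import numpy as np

def ice_score(probs_mix, pl_a, pl_b, lam):
    """Illustrative ICE computation on N mixed probing samples.

    probs_mix : (N, K) softmax outputs of the frozen model on the mixed inputs
    pl_a, pl_b: (N,) hard pseudo labels of the two samples in each mixed pair
    lam       : scalar mix ratio used to build the mixed inputs
    """
    # Hard one-hot of the interpolated label: the class holding max(lam, 1 - lam),
    # i.e., pl_a when lam > 0.5 and pl_b otherwise (the if-else in Algorithm 1).
    mix_labels = pl_a if lam > 0.5 else pl_b
    preds = probs_mix.argmax(axis=1)  # model's hard predictions on mixed samples
    return float((preds == mix_labels).mean())  # ICE = accuracy on mixed samples
```

For instance, with `lam = 0.55` every mixed sample inherits the pseudo label of its first component, and the ICE score is simply the fraction of mixed samples that the model classifies into that class.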
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewers for their valuable time and feedback. Particularly, we are grateful for the following recognitions: - Our work investigates a **crucial** [Reviewer PTRC] problem which is **well-defined and well-motivated in practice.** [Reviewer zt6b]. - Our method MixVal is **motivated** [Reviewer G1xN], **novel** [Reviewer 4S59], **simple** [Reviewer zt6b], **effective** [Reviewer PTRC, Reviewer zt6b], and **robust** [Reviewer G1xN, Reviewer PTRC, Reviewer zt6b]. - Our experiments are **impressive** [Reviewer G1xN], **extensive** [Reviewer zt6b] and **state-of-the-art** [Reviewer 4S59] to demonstrate the effectiveness of our method. - Our work has good soundness [Reviewer 4S59, Reviewer PTRC, Reviewer zt6b], good presentation [Reviewer G1xN, Reviewer PTRC, Reviewer zt6b], and excellent contribution [Reviewer zt6b]. Pdf: /pdf/4e27585b5fd3e91f48b9ef8a438ecb83e631cfa1.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
MarioGPT: Open-Ended Text2Level Generation through Large Language Models
Accept (poster)
Summary: This paper proposes MarioGPT with novelty search, a method that can generate new Super Mario Bros levels. MarioGPT is finetuned from GPT-2 to generate levels. The novelty search is used to get novel levels by randomly selecting a generated level, mutating it, and then filtering it based on a novelty criterion: the mean distance between the mutated level and the K closest elements from the existing levels (K-nearest neighbors). Experimental results show that MarioGPT has better level reconstruction accuracy than an LSTM, and 88.33% of the generated levels can be played by Robin Baumgarten’s A* agent. Strengths: 1. The idea of generating Super Mario Bros levels is interesting and might be helpful for both the reinforcement learning community and the game community. 2. The result shows that the finetuned GPT has the ability to generate decent Super Mario Bros levels, which is insightful. Weaknesses: 1. The evaluation criteria do not seem to be appropriate. For example, "playable by an A* agent" is not a convincing indicator that can measure the quality of generated levels. Actually, "playable by an A* agent" primarily measures the simplicity of the generated levels rather than their quality. Moreover, in section 4.4, "Guided Level Generation via Prompting", the "empirical evaluation" is not objective and lacks baseline (like LSTM) comparisons. Since this paper claims that the generated levels are text-controllable, it would be better to provide a more explicit demonstration of controllability. 2. The baseline (LSTM) in section 4.1 is weak. Table 1 shows that the pre-trained GPT-2 is very important in the generation process. Then, how does a GPT-2 without finetuning perform? How does GPT-3/3.5/4 perform? Also, I doubt the conclusion that the LSTM is not promptable, since it can be integrated with a text encoder. 3. Although the idea is interesting, the generated levels are simple. 
88.33% of the generated levels can be played by an A* agent, and only horizontal movements are included (Mario cannot crawl into the pipes and there is no hidden brick, making the generated levels even simpler than the traditional Super Mario Level 1-1). This greatly reduces the significance of this work, both in the field of artificial intelligence and in the gaming industry. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. How to control detailed level settings like the initial moving direction of an enemy? 2. Can MarioGPT be tuned on Super Mario Maker, which contains thousands of novel levels contributed by the gaming community? 3. Why do from-scratch-MarioGPT and adapter-MarioGPT perform worse than the LSTM? 4. MarioGPT generates trajectories as well as game contexts. What about generating trajectories first, then selecting a trajectory and keeping it unchanged, and then generating, mutating, and filtering the game contexts? Will this improve the “playable rate” of the generated games? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: See above questions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
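The novelty criterion summarized in the review above (mean distance between a mutated level and the K closest elements of the archive) can be sketched generically as follows; representing levels as flat numeric vectors, the Euclidean metric, and the threshold are assumptions for illustration, not details from the paper:

```python
import numpy as np

def novelty(candidate, archive, k=5):
    """Mean distance from a candidate to its k nearest elements in the archive."""
    if not archive:
        return float("inf")  # an empty archive accepts any candidate
    dists = sorted(float(np.linalg.norm(candidate - a)) for a in archive)
    return float(np.mean(dists[:k]))

def maybe_add(candidate, archive, threshold, k=5):
    """Keep a mutated candidate only if it is novel enough relative to the archive."""
    if novelty(candidate, archive, k) > threshold:
        archive.append(candidate)
        return True
    return False
```

In the loop described in the summary, a sampled level is mutated (e.g., by re-prompting MarioGPT) and `maybe_add` decides whether the mutant enters the archive of novel levels.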
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable feedback! **Addressing the questions from the reviewer** > How to control detailed level settings like the initial moving direction of an enemy? Because of the limitations of the current dataset and its labels, we do not know the initial direction of the enemy. However, our system can scale with more diverse prompts and we plan to explore that front in future work, touching upon more semantic and visual descriptions of levels. > Can MarioGPT be tuned on Super Mario Maker, which contains thousands of novel levels contributed by the gaming community? GPT2 has been shown to be a very scalable architecture and can digest large amounts of diverse data. Given this, we believe that our architecture can scale with increasingly diverse levels (such as those generated by Super Mario Maker). We hope to explore this more in future work. > Why do from-scratch-MarioGPT and adapter-MarioGPT perform worse than the LSTM? We hypothesize that the parameter space of MarioGPT-from-scratch is quite large and makes it harder to optimize. That being said, we simply utilized the same hyperparameters presented in the DistilGPT2 work: https://tinyurl.com/distillcode, so there may be some improvements to be made with a more optimized hyperparameter search and longer training. Tangentially, this highlights one benefit of finetuning, as there was little to no hyperparameter tuning. > What about generating trajectories first...? It’s definitely possible to mask out all other parts of the level and keep the path tiles, allowing MarioGPT to inpaint the level around the desired path. To use this, we would probably need to train a separate network to just generate “playable” paths, which could perform the same as (or worse than) the current system. That being said, this inpainting ability can still be very useful for path customization; we plan to look more into this in future work. 
**Addressing the weaknesses pointed out by the reviewer** > ..."playable by an A* agent" is not a convincing indicator that can measure the quality of generated levels... For procedural content generation of Super Mario levels (and many other tile-based games), A* is in fact the standard when it comes to measuring playability (see references [1-5]). The reason for choosing Robin Baumgarten’s A* agent for measuring playability comes from its performance in the 2009 Mario AI competition, where it beat handcrafted controllers and even simple evolved neural networks at getting the furthest in an infinite-level setting, as well as solving a corpus of levels [6]. To summarize, A* is able to solve difficult Mario levels beyond human performance (we refer the interested reader to a video of the A* agent in question: https://tinyurl.com/astaragent). [1] Volz, Vanessa, et al. "Evolving mario levels in the latent space of a deep convolutional generative adversarial network." Proceedings of the genetic and evolutionary computation conference. 2018. [2] Awiszus, Maren, Frederik Schubert, and Bodo Rosenhahn. "TOAD-GAN: Coherent style level generation from a single example." [3] Schrum, Jacob, Vanessa Volz, and Sebastian Risi. "Cppn2gan: Combining compositional pattern producing networks and gans for large-scale pattern generation." [4] Volz, Vanessa, et al. "Tools for Landscape Analysis of Optimisation Problems in Procedural Content Generation for Games." [5] Edwards, Maria, Ming Jiang, and Julian Togelius. "Search-based exploration and diagnosis of TOAD-GAN." [6] Togelius, Julian, Sergey Karakovskiy, and Robin Baumgarten. "The 2009 mario ai competition." > ...It would be better to provide a more explicit demonstration of controllability We compared levels that are prompted without a “pipes” description to those that specify either “little”, “some”, or “many” pipes (see Figure 1 of the rebuttal pdf). 
We found that without specifying pipes, the distribution of the number of pipes that MarioGPT generates is scattered, while specifying keyword descriptions results in distributions that match the prompt. This shows that prompt-conditioning has a significant effect on level generation. We saw similar behavior for the other tile categories as well. > ...how does a GPT-2 without finetuning perform, and how does GPT-3/3.5/4 perform? GPT2 and GPT3.5 cannot generate legitimate Mario levels when prompted, even when given few-shot examples and tile descriptions. This is because GPT2 turns out not to be powerful enough for complex in-context learning, and GPT3.5 seems to hallucinate and output random tiles that do not correspond to any prompt the user has given. The reviewer raised a great point about the LSTM: the LSTM is not quite promptable in the same way MarioGPT is (there’s no real attention mechanism), but we have included a simple way to prompt it (by appending the prompt embeddings to the input). Even with this, the LSTM ends up generating only ~31% playable levels. > Although the idea is interesting, the generated levels are simple... With regards to the simplicity of the levels (the reviewer pointed out that there are no pipe interactions + hidden bricks): some of the generated levels are still quite difficult for the A* agent, which only clears about 87% of the original Super Mario Bros and Super Mario Bros 2. The generated levels are also quite complex for human players, some requiring finely timed jumps and sharp increases/decreases in elevation. The generated levels, especially when combined with Novelty Search, are in fact often quite complex (see Figure 8). Hidden bricks and similar elements could be added easily by adding designated characters to the training levels, upon which the model would learn an appropriate token, which can then also be implemented in the simulator parsing the MarioGPT output. 
In general, as long as complex mechanisms can be converted to tokens and reflected in simulation, MarioGPT should be capable of generating them. --- Rebuttal Comment 1.1: Comment: We hope we were able to address the concerns of Reviewer an9Y. We believe we addressed the main criticism of (1) using A* as an evaluation metric, and (2) evaluating against other baselines. If there are any other questions or issues, please let us know. --- Rebuttal 2: Title: Thanks for the rebuttal Comment: Thanks for the rebuttal and the responses to my questions. Most of my concerns have been addressed, especially the comparison baselines. I will raise my score to 6.
Summary: This paper proposes a novel idea of using LLMs to perform Procedural Content Generation. It addresses several challenges, including the diversity of the generated environments, the existence of a feasible solution (playability), and generating with language guidance. It proposes two key algorithms. 1. MarioGPT - Fine-tuning DistilGPT2 so that it can predict the next slice of the game. It also allows prompting - the text conditioning is done by using BART to encode text prompts, then feeding the averaged hidden states into the cross attention. 2. Novelty search for open-endedness - Using MarioGPT as a mutation operator in the novelty search evolutionary loop, prompting MarioGPT with random prompts to generate candidates, and retaining the candidates that pass the novelty criterion to add to the archive. Then masked token prediction (inpainting) is done to ensure that the generated level has a feasible path solution. This paper conducts experiments that demonstrate (1) the high token prediction accuracy of MarioGPT, (2) good playability - i.e. most generated levels have feasible path solutions, (3) using a suitable temperature to control the trade-off between diversity and quality, (4) the accuracy in following the prompt and (5) the diversity of generated levels (in terms of the predicted path) achieved by novelty search. Strengths: 1. This paper proposes a novel framework for text-guided PCG: having a text-conditioned generative model that predicts new levels, then having a novelty search algorithm reject the levels that are too similar to existing ones. 2. This paper is written clearly. The methods are well motivated. The experiments are well-designed with detailed analysis and discussion. Weaknesses: 1. Experiments are lacking in measuring “controllability”. The prompts are limited to a small set. The capability of following instructions on a more semantic level is not measured, such as the scene layout. 
and the spatial, functional, and semantic relationships between objects (e.g. how tall the pipes are, “there is an enemy to the right of a pipe”). 2. The technique of first auto-regressively generating a scene, then using “inpainting” to ensure a feasible solution seems to be pretty tied to this specific type of game where the scene moves linearly in 1 dimension (i.e. the character can only move forward, not backward), and hard to generalize to other PCG scenarios. For example, in order to generalize to the cases where the character can move both forward and backward, non-trivial changes are needed. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. In section 4.4 "Guided Level Generation via Prompting", under what sampling temperature are these results measured? I am curious how / whether this randomness affects how accurately the model follows the prompts as well. The authors mentioned that randomness helps with diversity - does this sacrifice accuracy in following prompts? 2. On a related note, in section 4.3 the authors mentioned that the temperature controls the tradeoff between diversity and quality. Could you elaborate on what the definition of "quality" is? 3. In section 4.5, at what point does the “diversity” start to plateau? Are there baseline studies regarding the diversity of generated paths? 4. In section 4.2, are there quantitative baselines regarding the playability of generated environments? Could you share intuition regarding the trade-offs between playability versus diversity of the generated environments? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: As mentioned above, (1) the measurement of the capability to understand instructions semantically and (2) the generalizability of this approach beyond this type of game remain to be further studied. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the great feedback! **Addressing the questions from the reviewer** > In section 4.4 "Guided Level Generation via Prompting", under what sampling temperature are these results measured? ... The authors mentioned that randomness helps with diversity - does this sacrifice accuracy of following prompts? On a related note, in section 4.3 the authors mentioned that the temperature controls the tradeoff between diversity and quality. Could you elaborate on what the definition of "quality" is? These are some great points brought up by the reviewer. Large Language Models, especially those that are trained on smaller datasets, like the set of levels we used in our work, suffer from overfitting and even “memorization”. This can be seen in Figure 7, where a temperature of 1.0 effectively results in the model spitting out a level (or slice of a level) that exists in the dataset. To alleviate this, temperature can be used as a toggle-able hyperparameter to control the degree of randomness or uncertainty in predictions. This effectively changes the skewed distribution of token predictions (such as a very large logit value for a single token) to a smoother distribution. This has an important tradeoff – more randomness introduces more diversity in tokens at the cost of “tile accuracy”, which means that increasing temperature can reduce the impact of the prompt and the effect of previous contexts. This is what we mean by the “quality-diversity” tradeoff, where "quality" here indicates how "natural" the level is, or how similar its characteristics are to the original levels. In section 4.4, we utilize a common temperature value of 1.4. We will make sure to clarify this in the paper. > In section 4.5, at what point does the “diversity” start to plateau? We found that the diversity starts to plateau after around 350-400 generations. 
However, this is very sensitive to the behavior characteristic (the smoothed predicted path of the level), so it may be different for other behavior characteristics. > Are there baseline studies regarding the diversity of generated paths? With regards to baseline studies of diversity of generated paths, we include a t-SNE plot of levels generated by novelty search along with levels that are generated from randomly sampled prompts (Figure 2 in rebuttal pdf). The spread of levels generated by novelty search is much larger than that of the levels generated by sampling, indicating that novelty search results in a more diverse set of generated levels. > In section 4.2, are there quantitative baselines regarding the playability of generated environments? Could you share intuition regarding the trade-offs between playability versus diversity of the generated environments? We found that the best baseline, the LSTM, achieves around 31% solvable levels, suggesting that MarioGPT is much better at generating solvable levels (achieves ~88%). With regards to the trade-off between diversity of generated levels (from novelty search) and playability, we plotted t-SNE embeddings (in Figure 3 of the rebuttal pdf) and labeled “green” for solvable and “red” for unsolvable. We can see that the unsolvable levels are spread across the space without a discernible pattern, indicating that there isn’t really a relation between diversity of levels and playability. This is expected, because our mutations consist of just resampling portions of the level (with a random prompt) and stitching it together with the inpainting model. These models both learn how to generate valid and playable levels, so the mutated portions do as well. **Addressing the weaknesses pointed out by the reviewer** > Experiments are lacking in measuring “controllability”. ... The capability of following instructions on a more semantic level are not measured, such as the scene layout. 
and the spatial, functional, and semantic relationships between objects (e.g. how tall the pipes are, “there is an enemy to the right of a pipe”). This is a great point! For evaluating controllability, we included a new experiment (detailed in Figure 1 of the rebuttal pdf), where we compare random prompts excluding pipe descriptions to those with pipe descriptions. The distribution of number of pipes for levels where pipes are excluded in the prompts is very spread out, while the other ones have peaks corresponding to the description. This indicates that prompting does have an effect, and it's possible to control the generation. Our prompts are currently simple (only dealing with counts of objects), but we still see that they can guide the model towards generating a diverse set of levels. Additionally, given how language models are able to scale well with richer data, we believe that our system can scale to many more detailed prompts like the ones mentioned by the reviewer. For future work, we not only want to explore more of these semantic descriptions mentioned by the reviewer, but also visual descriptions of the level. This is an exciting direction and we hope to explore more in the future! > The technique of first auto-regressively generating a scene, then using “inpainting” to ensure a feasible solution seems to be pretty tied to this specific type of games where the scene moves linearly in 1 dimension (i.e. the character can only move forward, not backward), and hard to generalize to other PCG scenarios... In regards to the reviewer’s concern about the generality of the inpainting method: Because both the tile prediction model and the inpainting model work on character level predictions, they are actually robust to any orientation. For instance, for a game like Sokoban, where movements can be in any direction, the level is still composed of tiles that can be passed into the LLM one by one. 
Figuring out the "borders" after resampling a portion of the level and inpainting the surrounding tiles also does not require much change in the system. --- Rebuttal Comment 1.1: Title: Change contribution from Fair to Good Comment: Thank you for the rebuttal and detailed, helpful responses to my questions. I am keeping the rating, and raising the contribution from 2.Fair to 3.Good, because the added LSTM baseline highlighted your new models' capabilities of generating playable levels. I look forward to your future work on controlling the generated environment with more sophisticated prompts. Thank you again! :)
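As background for the temperature discussion in the rebuttal above, here is a minimal, illustrative sketch of temperature-scaled sampling over a token distribution. This is a generic example of the mechanism the authors describe (dividing logits by a temperature before the softmax), not MarioGPT's actual decoding code; the logit values are made up for illustration.

```python
import numpy as np

def sample_with_temperature(logits, temperature, rng):
    # Divide logits by the temperature before the softmax:
    # T > 1 flattens the distribution (more diverse tokens),
    # T < 1 sharpens it (closer to greedy decoding).
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()  # subtract max for numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return rng.choice(len(probs), p=probs)

rng = np.random.default_rng(0)
logits = [5.0, 1.0, 0.5]  # one strongly dominant token, as in a "memorized" slice
low_t = [sample_with_temperature(logits, 0.5, rng) for _ in range(200)]
high_t = [sample_with_temperature(logits, 2.0, rng) for _ in range(200)]
```

At temperature 0.5 the dominant token is chosen almost every time, while at temperature 2.0 the other tokens are sampled noticeably more often, which matches the "memorization vs. diversity" tradeoff discussed in the rebuttal.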
Summary: In this work the authors fine tune a DistilGPT2 model to produce diverse, largely playable Mario levels. They incorporate a novelty search to favor diversity in levels, and also explore conditioning level generation on natural language user prompts. Their novelty search proceeds as follows: Given an initial set of levels, a level is sampled from the set and randomly mutated. The mutation involves deleting a random contiguous segment of the level and sampling a new one in its place from their model, then using a fine tuned BERT model to do inpainting on the border regions to smooth the transition between this and the rest of the level. The mutated level is accepted if it is sufficiently different from the other levels, using a distance metric comparing the player path through the level. Strengths: - The use of LLMs for game level generation certainly makes sense, and I think it's valuable to publish work in this area for the NeurIPS community to see and build on. The technical approach taken here is original, to my knowledge. - The combination of genetic algorithms with LLMs is interesting and novel, and I found the use of BERT for inpainting to smooth out the transition between the mutated segment and the rest of the level to be a particularly clever approach. - The paper also does a great job of visualizing the levels produced by the model which is very helpful in getting a feel for the sorts of outputs that their model produces. The playability and temperature analyses were quite interesting as well. Weaknesses: - Section 4.5: It's hard to tell that the novelty search is actually helping, without a comparison to an ablated version that doesn't do the novelty search. In particular it's hard to tell whether the darker points in Figure 9 are filling in the space in between because of the novelty search, or if it's just that there are more sampled points so they happen to fill in some of the space in between the earlier points. 
Likewise with the rightmost graphs in figure 7 looking denser than the leftmost ones – taking more samples will always add some density, so in order to really conclude something you need a comparison to an ablation. Showing something like 300 levels from the novelty search plotted on the same tSNE as 300 levels from a version without novelty search (eg as a different color) would make it much clearer that there's actually an improvement in diversity (and likewise some analogous experiment for the other figure). * In a similar vein, in Table 3 it's hard to know how much the prompt is helping without seeing the scores without prompting (for an idea of what scores a fairly random approach would achieve); though I think this is a less major issue. * After seeing the LSTM from prior work used in the Table 1 comparison, I was surprised that LSTM comparisons weren't given for any of the later evaluations, which would be helpful for understanding how much better this LLM setup is than prior work. For example, we learn that 88% of levels are solvable, but how does this compare to the LSTM approach? Likewise in the diversity analysis, a naive LSTM baseline would be interesting (but no need for all the novelty search machinery). This would just help me get a sense of where this work stands compared to prior work on these metrics that are being measured. That said, I wouldn't reject this paper on that basis – 88% playable is a great statistic regardless of what you compare it to, and I don't actually expect that the LSTM would make particularly good levels, but having the full comparison would strengthen the evaluation. - As a last, minor point, it's surprising (in a way that makes me feel that an explanation is needed) that such poor performance is obtained by from-scratch-MarioGPT on the tile prediction task given it was trained on 200,000 datapoints. 
The "tile accuracy" is presumably a measure of how well the system does next-tile prediction (analogous to next-token prediction in language modeling, and presumably this is the objective that all these models are trained on). I'd imagine something as simple as "always predict an air tile" since most levels look like they're more than half air, or "predict the same tile that you saw to the left" would do quite well, and the inability to pick up on even simple trends like this after 200,000 datapoints is surprising to me. Perhaps there is additional complexity I'm missing? Overall, this is an interesting paper that I end up borderline reject on because of the evaluation points mentioned above, but I don't have a huge amount of prior experience in this area or knowledge of what typical evaluations for PCG look like, so I am open to revising through discussion. Technical Quality: 3 good Clarity: 3 good Questions for Authors: * Relating to the weakness point above, in Table 1 I'm a little surprised that everything except MarioGPT does so poorly (LSTM does okay at least though). It'd be helpful to have a little intuition for why this is so hard. - The concurrent work cited by this paper (Todd et al 2023) finds that pretraining (vs from-scratch) GPT2 has little to no impact on a range of metrics. That is of course a somewhat different task and setup, but I'm curious if you have thoughts on why you seem to find the opposite result in this domain? - What embedding is used for the tSNE? Is it something like the last layer of MarioGPT? - Do you guarantee that each tile is parsed as a separate token? Without this it seems like the network could potentially get confused when reasoning about lengths/distances, like if one token represented two blocks while another represented just one. Seems like it works either way, but I'm curious if this is something you're doing or might look at. - The language conditioning is cool! 
Of course, a natural extension of this would be moving towards level generation from more unstructured natural language. For example, flexibility with synonyms to "many" or more complex things like "a series of narrow pillars with an enemy on top of each one". That sounds like enough for a separate paper but I'm personally curious if you've done any preliminary work in that direction or what you think of it. To be clear, I don't think this is needed in this submission I just think that the work is cool and am curious! Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: I think that this paper does an adequate job of presenting its limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: First off, we thank the reviewer for the great and thoughtful feedback! **To answer the reviewer’s open questions:** > 1. "in Table 1 I'm a little surprised that everything except MarioGPT does so poorly (LSTM does okay at least though). It'd be helpful to have a little intuition for why this is so hard?” We hypothesize that the parameter space of MarioGPT-from-scratch is quite large and makes it harder to optimize. That being said, we simply utilized the same hyperparameters presented in the DistilGPT2 work: https://github.com/huggingface/transformers/blob/main/examples/research_projects/distillation/train.py, so there may be some improvements to be made with a more optimized hyperparameter search. We also hypothesize that extending the training process may help. The LSTM seems to have an easier time, possibly because of the smaller parameter space. > 2. “The concurrent work cited by this paper (Todd et al 2023) finds that pretraining (vs from-scratch) GPT2 has little to no impact on a range of metrics. That is of course a somewhat different task and setup, but I'm curious if you have thoughts on why you seem to find the opposite result in this domain?” We hypothesize that this is because of some notable differences between our work and the approach of Todd et al. (2023). In Todd et al. the text prompts and the generation of the level are done together (as opposed to using a frozen text encoder like our approach), which may have had an effect on performance. In addition, generating Sokoban levels could pose other challenges compared to generating Super Mario Bros levels, which we will also explore in more detail in future work. > 3. “What embedding is used for the tSNE? Is it something like the last layer of MarioGPT?” The embedding space is actually the smoothed moving average of the predicted path, which is essentially a time series (can be seen in Figure 4). We will make this clearer in the paper. > 4. 
“Do you guarantee that each tile is parsed as a separate token?” This is a great point! We verified this and each tile is indeed parsed as a separate token. We will make sure to add this detail in the paper. > 5. ...a natural extension of this would be moving towards level generation from more unstructured natural language. For example, flexibility with synonyms to "many" or more complex things like "a series of narrow pillars with an enemy on top of each one". We did some preliminary experiments that show it is possible to use synonyms for words. For example, changing “many” to “a lot” or “a ton” produces similar results. For more complex ones like the one mentioned by the reviewer ("a series of narrow pillars with an enemy on top of each one”), we wouldn’t expect the currently fine-tuned MarioGPT to work well. However, the approach should be able to generalize to such more complex prompts by keeping the MarioGPT architecture the same but adding diversity to the annotated training set. Currently there’s no easy way to add “visual” descriptions of levels, but there are many exciting approaches that we hope to explore in future work, like utilizing visual language understanding models. **Addressing the weaknesses pointed out by the reviewer:** > “Section 4.5: It's hard to tell that the novelty search is actually helping, without a comparison to an ablated version that doesn't do the novelty search. ” This was a very valuable comment and as suggested, we now created an archive by generating levels using randomly sampled prompts (as shown in green in Figure 2; see PDF attached to the rebuttal summary at the top). We combined the novelty search archive and the sampled archive and ran t-SNE on the combined archive. Visually, we can see that random sampling actually results in a decently diverse set of levels (indicated by the spread of points in t-SNE space). 
However, our novelty search approach clearly results in a much more diverse set of levels, as can be seen by the spread of points shown in orange. > “... in Table 3 it's hard to know how much the prompt is helping without seeing the scores without prompting” This was a very valid point and we have now investigated the effect of prompt conditioning in more detail, which is shown in Figure 1 in the PDF attached to the main rebuttal summary. In this figure, we compared the distribution of the number of pipes between levels generated with random prompts without pipes-related commands to random prompts with pipes-related commands. The results demonstrate that the prompt-conditioning has a significant effect on the tile distribution. We see similar results for the other tile types. > “... we learn that 88% of levels are solvable, but how does this compare to the LSTM approach?” Running the same evaluation setup as MarioGPT (over 250 randomly sampled prompts), we found that the LSTM generates playable levels ~31% of the time. > “As a last, minor point, It's surprising (in a way that makes me feel that an explanation is needed) that such poor performance is obtained by from-scratch-MarioGPT on the tile prediction task given it was trained on 200,000 datapoints. ” To clarify, tile accuracy here refers to non-air tiles. As pointed out by the reviewer, a majority of tiles are air tiles, and each model ends up heavily predicting air tiles, so we focus primarily on their ability to predict non-air tiles. However, there may be better metrics to measure tile predictions here, namely those that can deal with imbalanced class datasets. As mentioned in the response to the 1st question above, MarioGPT-from-scratch performance can probably be improved with more time and an optimized hyperparameter search. However, as a tangential point, this shows a major benefit of fine-tuning pretrained models, which seem to require much less effort in regards to hyperparameters. 
We will clarify these points in the paper. --- Rebuttal Comment 1.1: Title: Changed Rating Comment: Thank you for the extensive response and additional experiments/visualizations, I really appreciate this rebuttal. The addition of the first and third figures in the PDF that the authors attached make me much more convinced of the impact of the text conditioning and the novelty search. The additional baseline of LSTMs on the solvability objective is also helpful. My primary concerns in my initial review were around the soundness/thoroughness of the evaluation (baselines and ablations) and I feel the authors have dramatically improved that aspect of the paper, in addition to addressing my other thoughts. I believe that this would be a valuable paper for a NeurIPS audience, and have adjusted my rating to recommend Accept.
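For readers unfamiliar with novelty search, the accept-if-sufficiently-novel loop summarized in the review above can be sketched roughly as follows. The mutation operator, distance function, and threshold here are toy stand-ins (simple numbers and absolute differences), not the paper's GPT-based mutation or path-based behavior characteristic, and the novelty score is simplified to the mean distance to the whole archive rather than to the k nearest neighbors.

```python
import random

def novelty(candidate, archive, distance):
    # Simplified novelty score: mean distance to every archive member.
    if not archive:
        return float("inf")
    return sum(distance(candidate, a) for a in archive) / len(archive)

def novelty_search(initial, mutate, distance, threshold, generations, seed=0):
    rng = random.Random(seed)
    archive = list(initial)
    for _ in range(generations):
        parent = rng.choice(archive)
        child = mutate(parent, rng)  # e.g. resample a segment, then inpaint borders
        if novelty(child, archive, distance) > threshold:
            archive.append(child)    # keep only sufficiently novel candidates
    return archive

# Toy stand-ins: "levels" are numbers, mutation is a random step,
# and the "path distance" is just an absolute difference.
archive = novelty_search(
    initial=[0.0],
    mutate=lambda lvl, rng: lvl + rng.uniform(-1.0, 1.0),
    distance=lambda a, b: abs(a - b),
    threshold=0.2,
    generations=100,
)
```

The archive grows only when a mutated candidate is far enough from what has already been found, which is what drives the spread of points seen in the paper's t-SNE plots.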
Summary: This submission proposes MarioGPT, which aims to generate diverse-and-playable Mario levels with an LLM (GPT-2) through language prompts. The input and output of MarioGPT are level representations, not natural language. Human prompts are incorporated by adding a cross-attention layer to the LLM. A Novelty Search algorithm is further proposed to increase the diversity of generated levels. Strengths: + Leveraging LLMs for Procedural Content Generation (PCG) is a natural idea. This submission successfully accomplishes this, and the proposed MarioGPT could be really usable. + This submission conducts a comprehensive study of various aspects of the proposed method, including the tile/path prediction accuracy, the playability of generated levels, whether MarioGPT is memorizing the training dataset, and guidability by prompts. The overall experimental parts are convincing. The authors also analyze some current limitations and provide feasible solutions to them. + The proposed Novelty Search algorithm successfully improves the diversity of agent paths. Weaknesses: - Overall, I think Procedural Content Generation (PCG) with LLMs is 'quite' an obvious idea, making me doubt this submission's 'technical' contribution to the community. I did not see any other obvious weaknesses. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. In the field of PCG, is there any game other than Super Mario? What is the generalization ability of the proposed method to other games (if there is any)? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Same as in the Questions part. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful and useful comments! In terms of PCG applied to other games, while Super Mario Bros. is often used as a benchmark, there are certainly a variety of other games our approach could be applied to in the future. Another common PCG benchmark is dungeon-like games such as the classic video game “The Legend of Zelda” (e.g. Khalifa et al. [1]). Our approach should generalize to such dungeon-like games as well because MarioGPT processes levels by flattening tile-based levels into long vectors, which is agnostic to content that can be represented as a string. Additionally, concurrently to our work, Todd et al. [2] showed that LLMs can also be used to generate levels for other games such as Sokoban, which indicates the generality of the approach. Future work could also focus on other types of content, usually represented as graphs or images (e.g. Doom levels [3] or StarCraft maps [4]). However, these non-string representations would require more substantial changes to the MarioGPT architecture. [1] Khalifa, A., Perez-Liebana, D., Lucas, S. M., & Togelius, J. (2016, July). General video game level generation. In Proceedings of the Genetic and Evolutionary Computation Conference 2016 (pp. 253-259). [2] Todd, G., Earle, S., Nasir, M. U., Green, M. C., & Togelius, J. (2023, April). Level Generation Through Large Language Models. In Proceedings of the 18th International Conference on the Foundations of Digital Games (pp. 1-8). [3] E. Giacomello, P. L. Lanzi, and D. Loiacono, “DOOM Level Generation using Generative Adversarial Networks.” arXiv, Apr. 24, 2018. [Online]. Available: http://arxiv.org/abs/1804.09154 [4] Togelius, J., Preuss, M., Beume, N., Wessing, S., Hagelbäck, J., & Yannakakis, G. N. (2010). Multiobjective Exploration of the StarCraft Map Space. 2010 IEEE Symposium on Computational Intelligence and Games (CIG), 265–272. 
https://doi.org/10.1109/ITW.2010.5593346 --- Rebuttal Comment 1.1: Title: Official Comments by Reviewer 6gGD Comment: I appreciate the rebuttal, and the paper has addressed my questions and concerns. I will keep my rating 7 and hope the final version can contribute to the development of the field.
Rebuttal 1: Rebuttal: We would like to thank all the reviewers for their insightful comments, which helped us to significantly improve our paper. The most important new additions are: (1) a comparison to an ablated version without novelty search, which shows that novelty search is indeed crucial in allowing the discovery of a larger space of levels than random sampling; (2) we now also compare our approach to a prompt-able LSTM-baseline in terms of level playability and find that an LSTM-based approach generates playable levels around 31% of the time, which is significantly lower than the 88% for MarioGPT; (3) we clarify that A* is the current standard to evaluate procedurally generated levels and that being solvable by A* does not equate to the levels being simple; (4) we added a controllability test to examine how much the text influences the generated levels, which shows that MarioGPT is indeed steerable through text prompts; and (5) we evaluated the playability of levels generated by novelty search, showing that the majority are solvable and there is no discernible tradeoff between path diversity and the ability to generate solvable levels. The new figures related to these additions can be found in the attached PDF. Pdf: /pdf/8d62b0466491288750c256636705f38102576143.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Anchor Data Augmentation
Accept (poster)
Summary: The authors proposed a new data augmentation algorithm for regression datasets. They borrow ideas from Anchor regression model which captures the heterogeneity of the dataset using anchor variables to create additional augmented data. They show that applying this type of augmentation improves results compared to naive mixup augmentation on many datasets. Strengths: - Well-written paper. - The augmentation proposed is theoretically motivated. - Evaluated on many datasets and scenarios (ID, OOD datasets etc). Weaknesses: - The results are not significantly better than the baselines in low and high data regimes. - The theoretical motivation from Anchor Regression is not clearly explained or justified. - The comparison is limited only to C-Mixup as a prior method, could be broader. - More analysis could be provided on why and how ADA improves robustness. - (As far as I understand) ADA has more hyperparameters to tune compared to some other simpler data augmentation techniques. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Will your model generalize to datasets with only categorical variables? Do you have ablations on different values of "k" and "alpha"? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
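For context on the mixup baselines mentioned in this review, vanilla mixup applied to a regression dataset simply takes convex combinations of random pairs of samples, mixing inputs and targets with the same coefficient. The sketch below is a generic illustration of that baseline (not the authors' or C-Mixup's implementation); the linear toy target is made up so that mixed targets stay consistent with mixed inputs.

```python
import numpy as np

def mixup(X, y, alpha=0.2, rng=None):
    """Vanilla mixup for regression: convex combinations of random pairs,
    with the same lambda ~ Beta(alpha, alpha) applied to inputs and targets."""
    rng = rng or np.random.default_rng()
    idx = rng.permutation(len(X))          # random partner for each sample
    lam = rng.beta(alpha, alpha, size=(len(X), 1))
    X_aug = lam * X + (1 - lam) * X[idx]
    y_aug = lam[:, 0] * y + (1 - lam[:, 0]) * y[idx]
    return X_aug, y_aug

X = np.arange(10, dtype=float).reshape(-1, 1)
y = 2.0 * X[:, 0]  # toy linear target, so mixed targets match mixed inputs exactly
X_aug, y_aug = mixup(X, y, rng=np.random.default_rng(0))
```

On a nonlinear target the mixed labels are no longer exact, which is one motivation for label-aware variants such as C-Mixup and, in a different way, for ADA's anchor-based perturbations.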
Rebuttal 1: Rebuttal: Thank you for your direct review and questions about our procedure. We will address them in the following paragraphs. Data augmentation for regression is hard. As we mentioned in our paper, there are two papers on the topic. The results are sometimes much better than the baseline, but in most cases, they provide modest improvements. Given the effectiveness of data augmentation in classification, we should explore the potential solutions for regression. In the final version of the paper, we will expand on how a robust regression method that builds on the causality literature is reinterpreted as a data augmentation strategy. The overall goal is to get better regressors. ADA relies on AR to augment data samples. As in AR, it uses a linear projection to select perturbation directions that are considered more relevant based on the similarity in the batches. Hence, it inherits the generalization properties and prediction robustness (concerning those directions) from AR. Combining the original data with perturbed augmentations in the prescribed directions leads to improved predictions on the mixture of distributions and improved generalization. In Appendix Section A.2, we provide further explanations and figures to illustrate how ADA augments the samples and how each parameter influences the augmented samples. By constructing the Anchor matrix A from a one-hot encoding, ADA “mixes” the samples which fall into the same clusters, and the augmented samples are either moved towards or away from the centroid based on the value of $\gamma$. We control the range of values for $\gamma$ via the hyperparameter $\alpha$. We made a comparison to the methods proposed in the C-MixUp paper. We also compare ADA with the original MixUp and Mani-MixUp. Additionally, we have added the comparison to the Local MixUp in the attached PDF. Also, the other methods are for classification problems, and their application to regression is not straightforward. 
Our proposed method has three hyperparameters: the number of clusters $k$, $\alpha$, and whether to use manifold augmentation. For reference, C-Mixup also has three hyperparameters: bandwidth, $\alpha$ in the Beta distribution, and whether to use manifold mixing. Categorical variables are treated as reals, which makes them meaningless. But small values of $\gamma$ might still provide robustness to the solutions. We have not tried to use categorical variables only and wouldn’t expect improvements. We show how the results vary with $\alpha$ and $k$ for three representative datasets (see the attached PDF). The value of $k$ does not affect the solution significantly, and large $\alpha$ penalizes the results. --- Rebuttal Comment 1.1: Title: Thank you for the response. Comment: Thanks again for addressing my questions. I will update the score.
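To make the cluster-based perturbation discussed in this rebuttal concrete, here is a rough illustrative sketch: samples are clustered (the clusters playing the role of the one-hot anchor variable), and each sample's offset from its cluster centroid is scaled by a factor derived from $\gamma$, moving it toward the centroid for $\gamma < 1$ and away for $\gamma > 1$. This is a simplification for intuition only, with a hand-rolled k-means as a stand-in for the clustering step; it is not the paper's exact ADA transformation.

```python
import numpy as np

def anchor_like_augment(X, gamma, n_clusters=3, seed=0):
    """Scale each sample's offset from its cluster centroid by sqrt(gamma):
    gamma < 1 pulls samples toward centroids, gamma > 1 pushes them away,
    and gamma = 1 returns the data unchanged."""
    rng = np.random.default_rng(seed)
    # A few Lloyd iterations stand in for the clustering step.
    centroids = X[rng.choice(len(X), n_clusters, replace=False)].copy()
    for _ in range(10):
        labels = np.argmin(((X[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
        for c in range(n_clusters):
            if (labels == c).any():
                centroids[c] = X[labels == c].mean(axis=0)
    offsets = X - centroids[labels]
    return centroids[labels] + np.sqrt(gamma) * offsets

X = np.random.default_rng(1).normal(size=(30, 2))
X_away = anchor_like_augment(X, gamma=4.0)     # pushed away from centroids
X_toward = anchor_like_augment(X, gamma=0.25)  # pulled toward centroids
```

Sampling $\gamma$ from a range controlled by a hyperparameter (as the rebuttal describes via $\alpha$) then yields augmented copies perturbed along these cluster-relative directions rather than isotropically.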
Summary: The paper proposes a new data augmentation method called Anchor Data Augmentation (ADA) that can improve the performance of machine learning classifiers, especially in over-parameterized settings. Novelty: The work presents Anchor Data Augmentation (ADA), which is based on the concept of Anchor Regression. This is a unique approach, but its departure from existing methods is somewhat moderate. The main novelty lies in its application to machine learning classifiers using the concept of linear regression model fitting. However, one could argue that using regression for data augmentation is not entirely groundbreaking. Significance: The significance of the paper is somewhat moderate. The idea of using Anchor Regression to improve machine learning classifiers is promising, especially when it comes to over-parameterized settings. However, the paper seems to be more of a proof of concept, as it primarily focuses on housing datasets, leaving other domains unexplored. The lack of significant improvement over other existing methods also somewhat diminishes its overall impact. Soundness: The methodology presented is fairly sound, with ADA being applied to various types of regression models. However, there are concerns related to the many hyperparameters introduced and the lack of study of their impact, which could be a potential flaw in the work. Additionally, the paper performs a replication-like operation on the data, so doesn't this produce more homogeneous data? The paper does not seem to address potential issues of overfitting in the presence of homogeneous data, which is a common concern in machine learning. Clarity: The paper is written in a comprehensible manner. It provides a clear explanation of the proposed ADA method and its underlying concept of Anchor Regression. The experiments are also described clearly, though there is room for improvement in detailing the parameter settings and choices made during experimentation. 
It might benefit from restructuring certain sections to make the presentation even clearer, especially when discussing the results. Literature: The paper discusses some existing works, mainly focusing on regression models. However, it would have been beneficial if it drew more comparisons with other data augmentation methods and their results. It feels like there could be a more comprehensive discussion of how ADA fits into the broader landscape of data augmentation techniques, especially in terms of strengths and weaknesses. Conclusion: After taking the Area Chair's feedback into consideration, the paper's novelty is evident, but not groundbreaking. Its significance is moderate, given the specific domain application and the moderate improvement over existing methods. Soundness has potential issues due to the hyperparameters and the risk of overfitting. The clarity is fairly good but could be improved upon, and there is room for a more in-depth literature review. Based on these updates, my recommendation changes to 'Borderline accept' at this time. Strengths: ADA is based on the concept of Anchor Regression, which involves fitting a linear regression model to a subset of the training data and using the model to predict the target variable for the remaining data. ADA can be applied to various types of regression models, including neural networks, and can be used to improve predictions in the low-data regime. ADA can be applied to real-world datasets, such as the California and Boston Housing datasets, and can provide improved performance as the number of samples increases. Weaknesses: The paper includes many hyperparameters but does not study the impact of different hyperparameters on the performance of ADA or develop methods for automatically tuning them. While proposing the ADA method, the paper does not further consider new data augmentation methods that combine ADA with other techniques such as generative adversarial networks or transfer learning. 
Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: How does ADA compare to other data augmentation methods in terms of computational efficiency? Have you considered applying ADA to other types of models beyond regression models, such as decision trees or support vector machines? Have you evaluated the robustness of ADA to different types of data distributions and noise levels? Can it be used for augmentation performance testing on other visual datasets or point cloud datasets? Why are the hyperparameters set to a uniform distribution in the experiments, and although it is claimed that this is also effective for a gamma distribution, why are no experimental results provided? Is this also effective for other distributions such as chi-square, Rayleigh, sub-Gaussian, and super-Gaussian? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: The paper only explores the application of ADA to housing datasets and does not investigate its use in other domains such as healthcare or finance. Compared to other methods, the results of this paper's approach did not show significant improvements. The proposed method might struggle to address the issue of overfitting that tends to occur in scenarios with an abundance of homogeneous data. This is an extremely heuristic paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the constructive comments and discussion about scalability and intuition on hyperparameter selection. The standard deviations of the RMSE and MAPE results are in Tables 1 and 2 in Appendix B.3. The performance of ADA is indeed dependent on the choice of the anchor matrix A. As with AR, we can choose the anchor matrix based on expert opinion about “plausible” or “important” shifts (invariances) in the distribution. If such knowledge is not available, we can infer that information from the data. One way to select the anchor matrix is to use the one-hot encoding of cluster membership. This method partitions the data into similar clusters to account for locality and contracts or expands the samples around the cluster centroids. We provide example models trained on the nonlinear (cosine) data when A is defined by varying cluster sizes (Appendix A.2, Figure 4). For larger clusters, more samples are “mixed” to produce one augmented instance, which may result in over-smoothing of the underlying X-Y relation. On the other hand, small clusters lead to less “mixing”, and in the extreme case where each cluster has only one member, the augmented dataset would be identical to the original. In the general form, $\Pi_A$ provides a “collective” mixing of the samples in a mini-batch to determine a center, and $\gamma$ controls the amount of contraction or expansion around that center. Therefore, when the data has an (approximately) linear structure, with sufficiently large clusters the one-hot-based centroid is close to the underlying hyperplane, and the cluster contraction (expansion) preserves the underlying hyperplane. On the other hand, augmenting data with uniformly distributed perturbation directions would also preserve the underlying hyperplane. We believe this is the case with the California Housing dataset. 
However, for a nonlinear data structure, augmenting data samples in arbitrary directions with the same perturbation magnitude is bound to fail, and more complicated schemes are necessary. Data augmentation for regression is hard. There are very few papers, and the gains are typically small. We (as a community) have not been able to find a data augmentation for regression that is as good as for classification. When the gains are small, we should not expect them to always be better than the baseline. But in some cases, we are significantly better, and ADA never hurts the results. Yes, ADA provides a systematic method to “mix” multiple samples based on a collective similarity criterion, which is encoded by the anchor matrix A (computed in ADA via a clustering approach). When using k-means clustering to generate A, ADA “mixes” the samples that fall into the same clusters. The cluster centroid is computed via the $\Pi_A$ operator, and augmented samples are moved towards or away from the centroid based on the value of $\gamma$ (see Appendix A.2 “Anchor Matrices and Locality” for details). We have not found limitations when using our procedure on high-dimensional data, and the linear example at the beginning of the experimental section illustrates what happens as we decrease the number of data points. When there is little data, the error is too high and no method helps. When there is a lot of data, we obtain the same solution with all the algorithms. ADA helps in the middle range of the number of data points, when there is neither too much nor too little data. In high dimensions, we will never be in the too-much-data regime and should expect to be in the middle range. But we can also be in the too-little-data regime; in that case, it will be hard to find a good regressor unless additional information about the invariances is available. --- Rebuttal Comment 1.1: Comment: Thanks to the author for their work and patience in answering. 
--- Rebuttal 2: Title: Acknowledging the rebuttal Comment: Dear reviewer, Thank you for your time and effort. The authors have tried to address your comments in their rebuttal. What do you think about their response? Could you please acknowledge the rebuttal as well as the other reviews? Best, The AC
Summary: In this paper, the authors propose anchor data augmentation, which borrows from the recently proposed Anchor Regression method. Anchor data augmentation uses several replicas of the samples, generated according to the anchor matrix. The proposed augmentation is empirically evaluated for both linear and non-linear models. Strengths: 1. The proposed method is simple and easy to implement. The anchor matrix can be generated by k-means clustering without prior knowledge of the data distribution. 2. The illustrative examples of augmenting data generated from a cosine model show that the proposed method can generate augmented samples that better represent the underlying data distribution. 3. The proposed method is empirically evaluated on both linear and non-linear models. The results on the linear synthetic data with an MLP show that the proposed augmentation can significantly improve generalization performance. Weaknesses: 1. The standard deviation of the empirical evaluations is not reported for Sections 4.2 and 4.3. It is hard to verify the statistical significance of the results. 2. It seems the choice of $\gamma$ and the design of the anchor matrix $A$ largely remain hyper-parameters and require tuning. It would be interesting to see some theoretical justification (or intuition) for choosing $\gamma$ and designing $A$. 3. The improvement over vanilla augmentation is somewhat marginal. For the linear synthetic data with the linear model and for the California Housing dataset, it is not clear whether the proposed method is significantly better than vanilla augmentation. Technical Quality: 3 good Clarity: 3 good Questions for Authors: When using k-means clustering to generate the anchor matrix, can the augmentation be intuitively understood as constructing local interpolations to create augmented samples (where locality is defined by the clusters)? If so, does this method generalize well to high-dimensional data? 
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the constructive comments and discussion about scalability and intuition on hyperparameter selection. The standard deviations of the RMSE and MAPE results are in Tables 1 and 2 in Appendix B.3. The performance of ADA is indeed dependent on the choice of the anchor matrix A. As with AR, we can choose the anchor matrix based on expert opinion about “plausible” or “important” shifts (invariances) in the distribution. If such knowledge is not available, we can infer that information from the data. One way to select the anchor matrix is to use the one-hot encoding of cluster membership. This method partitions the data into similar clusters to account for locality and contracts or expands the samples around the cluster centroids. We provide example models trained on the nonlinear (cosine) data when A is defined by varying cluster sizes (Appendix A.2, Figure 4). For larger clusters, more samples are “mixed” to produce one augmented instance, which may result in over-smoothing of the underlying X-Y relation. On the other hand, small clusters lead to less “mixing”, and in the extreme case where each cluster has only one member, the augmented dataset would be identical to the original. In the general form, $\Pi_A$ provides a “collective” mixing of the samples in a mini-batch to determine a center, and $\gamma$ controls the amount of contraction or expansion around that center. Therefore, when the data has an (approximately) linear structure, with sufficiently large clusters the one-hot-based centroid is close to the underlying hyperplane, and the cluster contraction (expansion) preserves the underlying hyperplane. On the other hand, augmenting data with uniformly distributed perturbation directions would also preserve the underlying hyperplane. We believe this is the case with the California Housing dataset. 
However, for a nonlinear data structure, augmenting data samples in arbitrary directions with the same perturbation magnitude is bound to fail, and more complicated schemes are necessary. Data augmentation for regression is hard. There are very few papers, and the gains are typically small. We (as a community) have not been able to find a data augmentation for regression that is as good as for classification. When the gains are small, we should not expect them to always be better than the baseline. But in some cases, we are significantly better, and ADA never hurts the results. Yes, ADA provides a systematic method to “mix” multiple samples based on a collective similarity criterion, which is encoded by the anchor matrix A (computed in ADA via a clustering approach). When using k-means clustering to generate A, ADA “mixes” the samples that fall into the same clusters. The cluster centroid is computed via the $\Pi_A$ operator, and augmented samples are moved towards or away from the centroid based on the value of $\gamma$ (see Appendix A.2 “Anchor Matrices and Locality” for details). We have not found limitations when using our procedure on high-dimensional data, and the linear example at the beginning of the experimental section illustrates what happens as we decrease the number of data points. When there is little data, the error is too high and no method helps. When there is a lot of data, we obtain the same solution with all the algorithms. ADA helps in the middle range of the number of data points, when there is neither too much nor too little data. In high dimensions, we will never be in the too-much-data regime and should expect to be in the middle range. But we can also be in the too-little-data regime; in that case, it will be hard to find a good regressor unless additional information about the invariances is available. --- Rebuttal Comment 1.1: Title: Response Comment: Thank you for answering my questions. I have read the response. 
I keep my current score unchanged. --- Rebuttal 2: Title: Acknowledging the rebuttal Comment: Dear reviewer, Thank you for your time and effort. The authors have tried to address your comments in their rebuttal. What do you think about their response? Could you please acknowledge the rebuttal as well as the other reviews? Best, The AC
Summary: This paper introduces a new data augmentation method designed for regression problems. The method is based on anchor regression, where new samples are generated via their projection onto the normal subspace spanned by the anchors. The method is evaluated on several datasets and compared with ERM and C-Mixup, showing on-par performance. Strengths: The authors address the problem of data augmentation for regression, which is an emerging topic in ML, and thus the significance is high. The authors motivate their approach using anchor regression, although I doubt the proposed method and AR are at all related. Thus, originality is questionable; however, I might have missed something crucial. Weaknesses: In general, the paper seems to be technically sound. However, given Alg. 1 and lines 6-7 in it, it is not clear why Eqs. 3 and 4 are even mentioned in the paper, and what their relation to the method is in practice. Essentially, the authors project the data onto the normal subspace spanned by $A$. They motivate it via anchor regression (AR); however, it is clear that the results totally depend on the choice of $A$, which is somewhat independent of AR to begin with. Also, the authors use indices $i,j$ around Equations 7 and 8 but do not specify the roles of the indices. Thus, one of the main shortcomings of this work is that the justification of the approach and its relation to anchor regression are unclear and should be improved. Another shortcoming is the evaluation section. In particular, it is unclear to me how the ADA model is selected among the various ones trained. In addition, it seems like the grid search was more extensive for ADA than for the other baselines. Also, why is noise (vanilla DA) added only to the output? I would like to see the results of adding noise to the input, and to the input and output. There are no standard deviation results. The results of ADA are often inferior to the baselines. 
Technical Quality: 2 fair Clarity: 3 good Questions for Authors: See above Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: See above Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your direct review and comments about the clarity of the connection between AR and ADA and the derivation of the augmentation equations 7 and 8. We will rewrite these sections to improve the fluency and notation. In particular, ADA uses a linear projection, as in AR, to select perturbation directions considered more relevant based on the similarity of samples in the batch. The augmentation equations (7-8) are row-wise scaled versions of the modified X, Y in AR (5-6), where the scaling preserves the (non)linear relation between X and Y after augmentation. The subindices $i$ and $j$ denote the row and column indices of the corresponding matrices, as detailed on line 154 of the paper. We understand from your review that the connection between AR and ADA is murky. The variables in A are the main point of AR, as they are the ones that allow AR to robustify the regression solution, and they make the connection between AR and ADA. The variables in A are typically not available. We decided to use clustering to infer A, inspired by the solution in the first example in the AR paper. It is a design choice. We use locality consistency to create our A for data augmentation (C-MixUp does something similar). Other choices might be possible, but we believe clustering is a solid universal choice. We will improve this description in the paper. We did the same hyper-parameter search for all the models and did not go the extra mile to make ours better. The hyperparameters used for the benchmarks were the ones proposed in the C-MixUp paper, where they employ cross-validation with grid search. Furthermore, we report the results of all the models trained with the mentioned hyperparameters for comparison. We also report the results from the original C-MixUp paper because we could not replicate their results. For the other comparison methods, we obtained better results than those reported in the C-MixUp paper [1]. 
It is important to note that those differences are small and can be due to factors like initialization, random splitting of the data, or the selected optimization procedure. It is not worrisome, but it is notable. We also added noise to the X variable in vanilla DA, and the results were significantly worse. We decided not to add them to the attached PDF because they did not add anything. We can send them to the Area Chair if you want to see the results. The standard deviations of the RMSE and MAPE results are in Tables 1 and 2 in Appendix B.3. You mentioned in your last comment that ADA is sometimes not better than the baseline. Data augmentation for regression is hard. There are very few papers, and the gains are typically small. We (as a community) have not been able to find a data augmentation for regression that is as good as for classification. When the gains are small, we should not expect them to always be better than the baseline. But in some cases, we are significantly better, and ADA never hurts the results. [1] Yao, H., Wang, Y., Zhang, L., Zou, J. Y., & Finn, C. (2022). C-Mixup: Improving generalization in regression. Advances in Neural Information Processing Systems, 35, 3361-3376. --- Rebuttal Comment 1.1: Comment: Thank you.
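The rebuttal above leans on a standard property of anchor regression: fitting OLS on the modified X, Y (Eqns. 5-6) is equivalent to minimizing the anchor objective directly. The sketch below checks that equivalence numerically, assuming the textbook AR transform $W = I + (\sqrt{\gamma}-1)\Pi_A$; this is our reconstruction of Eqns. 5-6, not the paper's exact notation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, k, gamma = 12, 3, 2, 4.0
X = rng.normal(size=(n, p))
Y = rng.normal(size=(n, 1))

# One-hot anchor matrix from random cluster labels and its projection Pi_A:
A = np.zeros((n, k))
A[np.arange(n), rng.integers(0, k, n)] = 1.0
Pi = A @ np.linalg.pinv(A)

# OLS on the AR-transformed data, W = I + (sqrt(gamma) - 1) * Pi_A:
W = np.eye(n) + (np.sqrt(gamma) - 1.0) * Pi
b_ols = np.linalg.lstsq(W @ X, W @ Y, rcond=None)[0]

# Direct minimizer of ||(I - Pi)(Y - Xb)||^2 + gamma * ||Pi (Y - Xb)||^2,
# via the normal equations with M = W^T W = (I - Pi) + gamma * Pi:
M = (np.eye(n) - Pi) + gamma * Pi
b_anchor = np.linalg.solve(X.T @ M @ X, X.T @ M @ Y)

assert np.allclose(b_ols, b_anchor)  # the two solutions coincide
```

The check works because $\Pi_A$ and $I-\Pi_A$ are orthogonal projections, so $W^\top W = (I-\Pi_A) + \gamma\,\Pi_A$, which is exactly the weight matrix of the anchor objective.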
Rebuttal 1: Rebuttal: We thank all reviewers for their constructive feedback and suggestions for improving our contribution to NeurIPS, and we hope to have a fruitful discussion in the coming days. We also want to thank the Area Chair for leading this paper's review and discussion. We address each question in our direct responses to the respective reviews. This global response contains a PDF attachment presenting additional experimental results, which we reference throughout our responses. First, we show how the ADA results change with different values of $\alpha$ and $k$. Second, we compare the performance of Local Mixup [1] to our baselines and ADA. Last, we add results for two additional datasets to complete the comparison on all the data used by C-MixUp. We look forward to the coming discussion with all the reviewers. [1] Baena, R., Drumetz, L., & Gripon, V. (2022). Preventing manifold intrusion with locality: Local mixup. arXiv preprint arXiv:2201.04368. Pdf: /pdf/8e20ac20871a0bc7a9adc68de235645ddbad4263.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: The authors propose a new approach for automatic data augmentation in nonlinear regression, which requires no specific domain knowledge and whose added computational cost is low. The idea is borrowed from Anchor Regression and Mixup. The idea is to first run k-means on the data, use the cluster memberships to construct anchor variables, and then perform a non-linear anchor regression. They illustrate the performance of their method on linear and non-linear regression problems. Strengths: The authors did a comprehensive literature review, and the method borrows the strengths of mixup augmentation and anchor regression. Intuitively, performing k-means allows the method to borrow strength across samples that are "similar", and the anchor regression further improves robustness. The experiments also show that the method is especially useful when the number of observations is small. Weaknesses: 1. It is not very intuitive from Section 2.2 why Anchor Regression helps generalization. I understand the first paragraph, but it is not straightforward from the equation in (3). Why do partial pooling and IV help? 2. There is no theoretical analysis of the performance. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Why transform the variables rather than directly modifying the loss function as a nonlinear version of (4)? 2. Any rules for selecting the number of clusters $k$? Are the results sensitive to different choices? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have discussed some limitations of their work in the paper. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
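The pipeline described in the summary above (run k-means, then build anchor variables from cluster memberships) can be sketched as follows. The Lloyd's-algorithm implementation and function names here are our own, for illustration only; in practice a library routine such as scikit-learn's KMeans would play the same role.

```python
import numpy as np

def kmeans_labels(X, k, iters=20, seed=0):
    """Plain Lloyd's algorithm: returns one cluster label per row of X."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        # Squared distances from every sample to every center:
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def one_hot_anchors(labels, k):
    """Anchor matrix A whose columns one-hot encode cluster membership."""
    A = np.zeros((len(labels), k))
    A[np.arange(len(labels)), labels] = 1.0
    return A

# Two well-separated blobs -> k-means recovers the partition:
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0],
              [10.0, 10.0], [10.0, 11.0], [11.0, 10.0]])
labels = kmeans_labels(X, k=2)
A = one_hot_anchors(labels, k=2)  # each row of A sums to 1
```

The resulting A is what the (non-linear) anchor regression step then consumes as its anchor variables.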
Rebuttal 1: Rebuttal: Thank you for your positive review and questions about our paper. ADA relies on AR to augment data samples, and it inherits the generalization properties of AR. The main objective of AR is prediction robustness with respect to some directions of perturbations and interventions. Therefore, it is natural to expect augmentation with ADA to improve out-of-distribution accuracy for some distribution shifts at the cost of reducing in-distribution generalization. We understand why AR helps with generalization by looking at Equations 5 and 6 instead of 3, because these equations show the transformation of the data before applying OLS. The original data X and Y are projected onto the anchor variables. If the anchor variables are binary, representing a clustering assignment, $\Pi_A X$ is the cluster centroid. The modified samples pull the original data X (and Y) toward the cluster center if $\gamma>1$ and repel them if $\gamma<1$. AR uses only one value for $\gamma$, while we use a multitude of $\gamma$ values in ADA to be robust in different ways. In that way, ADA does something similar to C-MixUp, but with more diversity while mixing the samples, as the number of samples from the same cluster changes in each mini-batch. The AR paper has a theoretical analysis proving the benefit of using AR to solve regression problems when there is a shift in the distributions of the anchor variables, but we did not carry out an additional analysis of why ADA provides robustness. We could have modified the loss function instead of the samples, but we thought it was more natural to change the samples; it is what AR already does, i.e., AR shows that solving Eqn. 4 is equivalent to solving OLS with the transformed data (Eqns. 5 and 6). We used a held-out dataset to set the number of clusters, as we have not developed a good rule for setting them. 
In the attached PDF, we added a figure showing how the number of clusters affects the solution; the results change only slightly. --- Rebuttal 2: Title: Acknowledging the rebuttal Comment: Dear reviewer, Thank you for your time and effort. The authors have tried to address your comments in their rebuttal. What do you think about their response? Could you please acknowledge the rebuttal as well as the other reviews? Best, The AC
Summary: This paper designs a new mixup data augmentation algorithm for regression problems, a challenging setting for data augmentation. Specifically, the authors extend the Anchor Regression (AR) method into Anchor Data Augmentation (ADA), which utilizes several replicas of the modified samples in AR to generate more training samples. Since AR yields transformed data with optimal least-squares error once the values of the anchors are known, the proposed ADA generates robust augmented data within homogeneous groups (obtained by clustering). With comprehensive comparison experiments on linear and non-linear regression problems, the proposed ADA achieves competitive performance with previous mixup approaches, especially C-Mixup (a mixup method specially designed for regression). Strengths: * **S1**: This paper tackles the data augmentation problem in regression tasks, which is not well studied in the data augmentation community. Despite the proposed ADA using an existing AR method (refer to W1), this paper still has some valuable contributions and is interesting. * **S2**: The designed method is well-motivated and well-written, making it easy to follow. Meanwhile, the experimental settings and hyper-parameters of ADA are provided in the main text or the appendix, which ensures practical usage and reproducibility. Weaknesses: * **W1**: In comparison to C-Mixup and AR, the novelty of the proposed ADA is relatively weak. The idea of ADA is straightforward: it generates reliable augmented samples in the local scope of the clustered anchors. From my perspective, ADA seems like an improved version of C-Mixup built on the existing AR method, which is less novel than these two works. * **W2**: The conducted experiments mainly focus on 1D data (e.g., linear synthetic data, time series prediction) in tabular format. More data modalities are essential to verify the data augmentation for generalization purposes. 
For example, C-Mixup provides angle regression tasks with real-world images. * **W3**: The compared mixup baselines are not representative enough. For example, C-Mixup compares against learnable mixup methods (e.g., PuzzleMix [1] and AutoMix [2]) in addition to vanilla mixup and Manifold Mixup. Meanwhile, a more thorough literature review of published mixup augmentation methods is required in Sec. 2. Considering the pros and cons, I decided to borderline reject this manuscript and look forward to the authors’ feedback during the rebuttal period. #### Reference [1] Jang-Hyun Kim, et al. Puzzle Mix: Exploiting Saliency and Local Statistics for Optimal Mixup. ICML, 2020. [2] Zicheng Liu, et al. AutoMix: Unveiling the Power of Mixup for Stronger Classifiers. ECCV, 2022. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Refer to the weaknesses; I have no more questions. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: I have found no more limitations besides my weaknesses and concerns. ### Post-rebuttal Thanks for the detailed replies, which addressed my concerns, and I raised the rating to 5. I hope the authors add more experiments and discussions with existing works. Maybe an automatic version of the proposed ADA with fewer hyper-parameters could be an improvement direction. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your balanced review and your suggestions for improvement. The literature on Mixup methods is vast and especially interesting for image classification. There are only two valid approaches for regression (reviewed in our paper). C-Mixup is a solid paper, and it brought some of the Mixup gains to regression problems. Anchor regression (AR) is a method to robustify regression methods in a narrow set of cases. It builds on the causality literature and is not a method for data augmentation per se. ADA (our proposal) has several relevant contributions: 1. We reinterpret AR as a data augmentation. AR and MixUp come from different sets of assumptions and are almost different research fields in ML altogether. Noticing that AR applies to data augmentation is not straightforward and is, in our opinion, a significant contribution. 2. Our work empirically shows that using $\gamma<1$ helps with generalization too. In MixUp, the augmentation interpolates between the original samples. ADA does that for $\gamma > 1$ and pushes samples further apart for $\gamma < 1$. It is an added benefit of using the AR framework for data augmentation in regression. We repeat all the experiments from Sections 5.1 and 5.3 of the C-MixUp paper. There are tabular, time series, and image data in that set of experiments. We report the results for the datasets missing from our original submission in the attached PDF file (Table 2). The experiments in Section 5.2 are not reproducible with their publicly available code (comparisons were not possible). Except for the toy cosine data, all our experiments are multi-dimensional. PuzzleMix and AutoMix are solid contributions to the MixUp literature for image classification. We will add them to our literature review. Both methods are not directly applicable to regression problems. In any case, we also tried to run their code; it was not straightforward, and we cannot present preliminary results here. 
In the attached PDF, we have also added Local MixUp for comparison (Tables 1 and 2). --- Rebuttal Comment 1.1: Comment: Thanks for the detailed replies and for conducting additional experiments in the PDF. The replies addressed my concerns, and I raised the rating to 5. I believe that improving mixup augmentations for regression tasks is a promising research topic, and I hope the authors add more experiments and discussions with existing works. Perhaps an automatic version of the proposed ADA with fewer hyper-parameters could be a direction for improvement.
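To illustrate the interpolation/extrapolation behavior discussed in contribution 2 of the rebuttal above, here is a generic mixup-style sketch for regression pairs. This is *not* the authors' ADA implementation (ADA's transformation comes from anchor regression); the `mix_pair` helper and the use of a plain mixing coefficient `lam` standing in for the role of $\gamma$ are illustrative assumptions only.

```python
def mix_pair(x_i, y_i, x_j, y_j, lam):
    """Linearly mix two regression samples (feature list + scalar target).

    lam in (0, 1): the augmented sample lies between the originals
    (interpolation, as in vanilla mixup).
    lam outside [0, 1]: the augmented sample is pushed beyond them
    (extrapolation, loosely analogous to ADA's gamma < 1 regime).
    """
    x_new = [lam * a + (1.0 - lam) * b for a, b in zip(x_i, x_j)]
    y_new = lam * y_i + (1.0 - lam) * y_j
    return x_new, y_new

# Interpolation: midpoint of the two samples.
x_mid, y_mid = mix_pair([0.0, 0.0], 0.0, [2.0, 2.0], 2.0, 0.5)
# Extrapolation: pushed past x_i, away from x_j.
x_out, y_out = mix_pair([0.0, 0.0], 0.0, [2.0, 2.0], 2.0, 1.5)
```

Under this reading, the rebuttal's claim is that augmenting with points *outside* the convex hull of the training pairs (the extrapolation case) also helps generalization in regression.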
Reproducibility in Multiple Instance Learning: A Case For Algorithmic Unit Tests
Accept (poster)
Summary: This paper examines whether existing deep multi-instance learning algorithms are indeed "multi-instance learning". Specifically, it proposes a unit test for multi-instance learning (MIL) algorithms. The goal of this test is to examine whether an MIL algorithm satisfies the multi-instance assumption. Two widely accepted MIL assumptions are tested: the standard MIL assumption and the threshold assumption. The results show that all attention-based MIL algorithms do not pass the test. However, CausalMIL, which was proposed at NeurIPS last year, and mi-Net, which is a basic deep extension of mi-SVM, are the only deep MIL algorithms that passed the test. Strengths: 1. The motivation of this work is great. As the test results have shown, all deep MIL algorithms based on attention do not respect the standard MIL assumption. Although the results are not theoretically surprising, as attention performs a weighted average, it is nice to have a paper that formally identifies the problem with good experimental support. 2. The proposed unit test is model-agnostic. Therefore, any existing or future MIL algorithms can be tested. 3. The MIL algorithms tested in this work are representative of the current state-of-the-art in classic and deep MIL algorithms. Weaknesses: 1. The writing, formatting, and presentation of this work need significant improvement. See my detailed comments for more. 2. The discussion on the implications of the test results should be expanded. There should be more discussion on how the algorithms that passed the test can be better explored in applications such as histopathology image classification, and how the algorithms that failed the test should be treated cautiously in downstream applications. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: I have reviewed a previous version of this manuscript. This version has addressed most of my previous concerns by adding some important recent deep MIL algorithms. 
However, there are still a few points that need urgent improvement. **Formatting and writing**: This area needs a lot of improvement. I have tried my best to list some issues: line 15, "MIL respecting" to "MIL assumption respecting" line 19, "points" to "bags", samples in MIL are not points line 34, "occurs with frequency" to "occurs frequently" line 41-42, the sentence needs rewriting; phrases like "that is that" are difficult to follow. line 45, "as well as a strategy to avoid repeat occurrences" it is unclear what "repeat occurrences" refers to I suggest including a brief test result summary in the introduction. E.g., which algorithms satisfy the standard MIL assumption and which algorithms satisfy the threshold assumption. line 184, "testing distribution AUC of <0.05" should be "<0.5" line 186-188, it's better to explicitly say which AUC is for test and which AUC is for training. All tables should be reformatted. Currently, they look like they were prepared for a two-column submission. line 391-392, these texts only occupy half of the page width. **References also need work**: [4] [10] should have page numbers, such as Advances in Neural Information Processing Systems xx. Please check how NeurIPS papers are properly cited. [21] "h" should be upper case. [32] [33], "citation key" should be deleted. Also, please check how to properly cite NeurIPS and ECAI published papers. ... and many more. *I would like to see the authors provide an updated manuscript with these issues (not limited to those listed above; there most likely will be more) addressed during rebuttal*. **Discussion**: As Test 1 in Section 4.1 has more implications for current MIL applications of interest, e.g., histopathology image classification, more space should be allocated to discussing these test results. One way of improving the organization is to divide the test results into classical MIL and deep MIL algorithms. For classical methods, the results are as expected, so you only need to report them. 
For deep MIL methods, MI-Net/MIL-pooling/Tran-MIL/GCN-MIL/Hopfield failing Test 1 has important implications for applications and should be discussed at length. As this work empirically shows that attention-based deep MIL algorithms do not meet any of the tested MIL assumptions, it should provide more discussion on the implications of these results, ideally in a separate discussion/practical implication section. For applications that are suitable for the standard MIL assumption, e.g., histopathology image classification and other medical applications, it is important to develop algorithms that can pass Test 1. In other words, CausalMIL and mi-Net should be explored more in this direction, while attention-based methods should be avoided. For applications that are suitable for the threshold MIL assumption, e.g., natural scene classification and others, it is important to develop algorithms that pass Test 2 and Test 3. These discussions can further improve the impact of this work, and possibly improve the correctness and explainability of future MIL algorithms. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The authors could further improve this work by noting the limitation that there exist other MIL assumptions that this work does not test. See the assumptions mentioned in: Foulds and Frank, A review of multi-instance learning assumptions. 2010. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We hope the below fully satisfies your concerns. Please let us know if there are any outstanding issues that we did not fully satisfy. W2/Discussion 1: Please see our all-reviewer rebuttal, where we have added a new paragraph that we believe helps to satisfy this concern. If it does not we are happy to expand. > include a brief test result summary We will include the below table with an explanation in revision (italics for claims are implied by the writing but not formally stated, italics for Threshold _Pass_ are because they fail the standard test, which means they can't fully satisfy the Threshold MIL). | | mi-Net | MI-Net | MIL-Pooling | Tran-MIL | GCN-MIL | CausalMIL | Hopfield | mi-SVM | MI-SVM | SIL | NSK | STK | MICA | MissSVM | |-----------------|:--------:|:--------:|:-----------:|:-----------:|:-----------:|:---------:|:---------:|:--------:|:--------:|:----:|:----:|:----:|:--------:|:--------:| | Claim: | Standard | Standard | Standard | _Threshold_ | _Threshold_ | Standard | Threshold | Standard | Standard | None | None | None | Standard | Standard | | Standard Test | Pass | F | F | F | F | Pass | F | Pass | Pass | Pass | F | F | Pass | Pass | | Threshold Tests | F | _Pass_ | F | F | F | Pass | _Pass_ | F | Pass | F | F | F | F | F | We have re-formatted tables to be wider, please see the below example (Table 1): | | mi-Net | MI-Net | MIL-Pooling | Tran-MIL | GNN-MIL | CausalMIL | Hopfield | mi-SVM | MI-SVM | SIL | NSK | STK | MICA | MissSVM | |------------|--------|-----------------|----------------------|-------------------|------------------|-----------|-------------------|--------|--------|-------|----------------|----------------|-------|---------| | Train Acc. | 0.991 | 1.000 | 1.000 | 1.000 | 1.000 | 0.999 | 0.624 | 0.999 | 1.000 | 0.992 | 1.000 | 1.000 | 0.500 | 0.995 | | Train AUC | 0.998 | 1.000 | 1.000 | 1.000 | 1.000 | 0.999 | 0.495 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | | Test Acc. 
| 0.993 | 0.000 | 0.000 | 0.000 | 0.000 | 0.996 | 0.500 | 0.935 | 0.986 | 0.766 | 0.000 | 0.466 | 0.500 | 0.449 | | Test AUC | 1.000 | 0.000 | 0.000 | 0.000 | 0.000 | 1.000 | 0.488 | 1.000 | 1.000 | 0.998 | 0.000 | 0.000 | 1.000 | 0.551 | We have noted and fixed all other formatting issues you mentioned. Our citation style was generated by paper-organization software. We are creating a new `.bib` file and manually re-entering all cited works based on the bibtex suggested by the host platform (e.g., ACM). >I would like to see the authors to provide an updated manuscript The NeurIPS OpenReview system does not allow us to do this. We hope the reviewer recognizes that we have earnestly incorporated every item of feedback and followed through on all our prior commitments from this article's previous review, and will do so again. We believe the paper has been significantly improved, and are glad the reviewer recognized the improvements, and that two of the new reviewers find the paper highly readable thanks to your prior feedback. > Limitations We will add the following text to the revised manuscript. Note that our current work addresses only two of the most common forms of MIL, and does not guarantee that a passing algorithm is valid. This is critical when the MIL algorithm under test is in a larger hypothesis class than that considered in this work. As noted in the seminal work of [15 from paper], the MIL model includes radial extensions to the concept class [1], unsupervised MIL clustering [2], and multi-class MIL [3], among others. In all of these cases, our tests may be insufficient to detect the more specific issues involved. Future work may develop more general program synthesis to generate tests given MIL constraints, or attempt to develop techniques for gradient-based search for a "failure certificate" given a differentiable model and constraint specification. Such future work could eliminate the manual work necessary to devise new tests under our current approach. 
In particular, the parameters of our test distributions are developed by manual checking, which may not be tenable in more complex hypothesis spaces. (Note: the citation formatting in this rebuttal is copied from each host website's default style, as we are focusing on the rebuttal and on preparing all updates to the manuscript.) 1. Stephen Scott, Jun Zhang, and Joshua Brown. On Generalized Multiple-Instance Learning. International Journal of Computational Intelligence and Applications 2005 05:01, 21-35. 2. Zhang, ML., Zhou, ZH. Multi-instance clustering with applications to multi-instance prediction. Appl Intell 31, 47–68 (2009). https://doi.org/10.1007/s10489-007-0111-x 3. Xinyu Xu and Baoxin Li. Multiple Class Multiple-Instance Learning and Its Application to Image Categorization. International Journal of Image and Graphics 2007 07:03, 427-444. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for the replies. These addressed most of my previous concerns. I raised my score from 6 to 7 in favour of the paper. However, I still suggest that the authors carefully proofread and improve the manuscript. --- Reply to Comment 1.1.1: Comment: We appreciate the score raise and the helpful feedback for the paper. We will be reviewing the manuscript carefully for typos as we work on the revisions discussed with you and the other reviewers. Thank you!
Summary: This work proposes a set of algorithmic unit tests to verify whether multiple instance learning (MIL) models adhere to underlying MIL assumptions. The standard assumption is that a bag of instances is positive if and only if at least one of the instances in the bag is positive, otherwise it is negative (binary classification). The negative instances are considered null (background) instances, meaning the only causal link between instances and bag label is the occurrence of positive instances, and thus models should only be using the presence of positive instances for decision-making. Through the use of three algorithmic unit tests, it is demonstrated that existing MIL models do not adhere to this strict assumption, and instead utilise information from the null instances in their decision-making (i.e., to make a negative bag prediction). The tests are applied to a set of non-deep MIL models (such as SVM models) and deep MIL models. Strengths: **Originality** 1. The work is unique in its exploration of whether MIL models adhere to the underlying MIL assumptions that fit the constraints of their problems. 2. The proposed tests are a novel way of constructing train and test datasets where the assumptions hold but it is possible to detect if the model is learning rules that do not adhere to the assumption. **Quality** 1. The deep models are well chosen. Many pieces of new research build upon these models, so the assumption is that if the original models violate the MIL assumptions, then the more advanced models will also be in violation. 2. The results of the tests are easy to interpret - it is obvious which models are violating which tests. **Clarity** 1. The majority of the paper is easy to read and follow. 2. A strong combination of mathematical notation and algorithmic blocks are used to convey the tests. 3. The results are clearly presented and well discussed. **Significance** 1. 
The finding that MIL models that are widely utilised in research do not actually adhere to the implicit MIL assumptions they are meant to follow is an interesting result. Weaknesses: **Quality** 1. The way concept classes are defined in the algorithmic tests is a little convoluted. For example, in the presence assumption (test 1), two positive class indicators are used: $\mathcal{N}(0,3)$ and $\mathcal{N}(1,1)$, but these belong to the same concept. It is not stated why two positive class indicators are required and why these belong to the same concept class. This appears to unnecessarily complicate the dataset generation process. **Clarity** 1. $x$ is used in most cases to represent an instance, but Section 3 (lines 156 and 157) uses $z$ instead of $x$. I don't see any reason why this is the case - I think it would aid understanding to use $x$ or $x_i$ instead. 2. Line 184 - I believe "... a testing distribution AUC of < 0.05" should be 0.5 not 0.05. 3. The use of $t$ in Section 3 to describe the "number of items" is a little misleading as $t_k$ is also used for thresholds in Sections 3.2 and 3.3. Furthermore, I assume the number of items in this case is the number of instances per bag? In this case, it may be easier to utilise the notation of $n$ for number of instances per bag as defined elsewhere in the work. 4. The number of instances per bag in the various unit tests is not clear. There are various parameters ($t$, $b$, $k$) used differently in the three algorithmic tests that are sometimes undefined. It would be useful to have a summary of the distribution of bag sizes and witness rates for the train/test datasets for each algorithmic test. Please also see questions below. 5. In Section 4, it is mentioned that there are 100,000 training samples - I assume this is the number of bags, but making this explicit may aid clarity. 6. 
I think extra clarity is needed in the instance generation notation to show that the instance has $d$ dimensions, for example $x \sim \mathcal{I}(\mathcal{N}(a, b), d)$. This would overcome the need to "abuse notation" in line 202 and make the algorithmic blocks clearer - at the moment, if viewed in isolation, it appears the items being added to the bags are scalar values rather than $d$-dimensional instances. 7. It would be useful to have a summary table with simple ticks and crosses to state which models pass which algorithmic tests. At the moment it is difficult to get a high-level summary of the findings (having to scroll between three different tables). 8. Typo on line 438 - "onlye". **Significance** 1. The fact that some models do not adhere to the underlying MIL assumptions is an interesting finding. However, the actual implications of models not following these assumptions are not extensively discussed. The impact of the work would be more significant if real-world examples could be given that describe why not adhering to these assumptions is a problem. My main argument against this work is that even though some models do not adhere to all assumptions, they still perform well on the datasets they are applied to in the literature, so why does failing the unit tests matter? What implications does this have for designing/using MIL models going forward? Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. In each unit test, is the number of instances per bag consistent (maybe not fixed, but equal in expectation) for the positive and negative bags in the train and test distributions? The number of instances per bag does not need to be the same across tests, but I would expect the average bag size to be the same in the train and test distributions within an experiment to ensure this is not a factor in changing the model performance. 
By the MIL assumptions studied in this work, bag labels should be invariant to the number of instances per bag, but it is unclear whether this is accounted for in the tests. This could be the case for an additional fourth test that investigates different bag sizes (instances per bag) between the train and test distributions. 2. Following on from the above question, does the witness rate (number of positive instances in a bag) remain consistent between the train and test distributions for each of the algorithmic tests? For example, when injecting the poison instances, does the number of true negative instances change to keep the witness rate consistent? I wonder if some models are sensitive to the witness rate (i.e., they have learnt something about the expected bag size and witness rate during training), and if this affects the outcomes of the algorithmic tests. In both questions 1 and 2 above, I'm highlighting potential additional changes between the train and test distributions that could also affect the performance of the MIL models (beyond the intended changes). Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Limitations and areas for future work are not clearly listed. Suggested limitations: 1. Algorithm 2 does not have negative bags that contain neither of the $c_1$ or $c_2$ concept classes (i.e., negative bags have either one instance from $c_1$ or one instance from $c_2$). This may impact what the models are learning and is not discussed. 2. The work discusses multiple underlying concept classes, but only uses binary bag labels (positive and negative bags). Some MIL datasets have multiple positive classes, see the SIVAL and Four MNIST-Bags datasets discussed in [1]. 
In these datasets, null (background) instances still exist, but the different concept classes for instances relate to different positive classes. An extension to this style of problem is not discussed in this work - would the proposed algorithmic unit tests be able to scale to these datasets? I think the work is mostly robust and the findings are interesting, but additional clarity and discussion of the impacts is required to push me towards acceptance. [1] Early, Joseph, Christine Evers, and Sarvapali Ramchurn. "Model Agnostic Interpretability for Multiple Instance Learning." ICLR (2022). Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We hope the below answers all of your questions and resolves any concerns in supporting our manuscript. Please do let us know if there are any unresolved concerns. Q1: With the exception of the false-frequency test, which explicitly tests an errant bias toward the frequency of items, yes, the bag sizes are approximately equivalent. Our code implementation actually supports a second version of each experiment where the number of background items is increased to make each bag have the same total number of instances, and this does not change any of our results. Q2: Again, the false-frequency test is a test that changes the witness rate, but the other two tests have the same average witness rate. L1: This is an excellent point. We have conducted a new experiment that gives a 1/3 chance each of having only $c_1$, only $c_2$, or neither. In this case, all deep learning results are the same except TranMIL, which goes from 0% AUC to 89% AUC. While this does not alleviate TranMIL's failure of the test under the original settings, it matches our intuition that the modified test is "easier" for the algorithms to get right, and so we would recommend the current form as the default. The SVMs take a very long time to run, but we will include their results in an appendix with this new information. L2: This is a good point; our tests do not cover all MIL scenarios, including non-binary MIL problems. We will incorporate this, along with the similar feedback from reviewer 3dcE, into the revision. Quality: The design stems from our adversarial approach to the MIL problem: what kind of structure could we develop that has a valid MIL solution, but is more easily "solved" by a non-MIL hypothesis? 
The MIL problem places no limit on the distribution of any concept class $c_j$, and so we use a multi-modal distribution to create a scenario that has MIL-violating solutions that may appear to better match the training data, but would not generalize to a test set the way a MIL-adhering solution would. In the case of test 1, we will add the following text to clarify: Note that the multi-modal positive class (swapping between $\mathcal{N}(\vec{0}, I_d \cdot 3)$ and $\mathcal{N}(\vec{0}, I_d)$) is necessary for the test to be effective in practice. Using a unimodal distribution makes the positive signal equally as strong as the `Poisoning` signal, and thus lets more (in this case, all) methods pass when they don't actually support the MIL model. In addition, we find that giving a large variance to one of the Gaussians increases the difficulty of learning a wide area of recognition for the positive classes, compared to the tight variance of the `Poisoning` signal. Note that the positive class signal is not an overly challenging learning problem, but the goal is to make the `Poisoning` signal sufficiently easier to learn that a non-MIL model would reliably prefer to learn the (invalid) hypothesis over the MIL hypothesis. Clarity 1: Our intuition was that $\boldsymbol{z}$ would be clearer, but apparently we were in error. We agree that using $\boldsymbol{x}$ is correct and are happy to change it accordingly. C2: You are correct, thank you for the catch. C3: In this case, $t$ was meant to be an arbitrary value sampled from the discrete uniform distribution. We will clarify this in revision and change it to $i$ for integer. C4: See our answers to the Questions above. C5: Yes, training bags; we will fix that in revision, thank you for the catch. C6: Your point is well taken; we propose updating the manuscript to $\mathcal{N}(\vec{a}, I_d \cdot b)$ to make it clear that this is a $d$-dimensional distribution with a mean vector and diagonal covariance. 
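The adversarial construction described above (a multi-modal positive concept plus an easy-to-learn "poison" signal that a non-MIL model latches onto) can be sketched roughly as a bag generator. This is only one plausible reading of the test design, not the paper's actual generator: the bag size, dimensionality, and the location of the poison Gaussian are made-up illustrative parameters, and here the poison sits in negative bags at train time and moves to positive bags at test time.

```python
import random

def gauss_vec(mean, std, d):
    # One d-dimensional draw from an isotropic Gaussian N(mean*1, std^2 I).
    return [random.gauss(mean, std) for _ in range(d)]

def make_bag(label, split, d=2, n_background=8):
    """One synthetic bag for a Test-1-style poisoning check (illustrative)."""
    bag = [gauss_vec(0.0, 1.0, d) for _ in range(n_background)]  # background
    if label == 1:
        # Multi-modal positive concept: randomly a wide or a tight Gaussian,
        # per the rebuttal's clarification about why two modes are needed.
        std = 3.0 if random.random() < 0.5 else 1.0
        bag.append(gauss_vec(0.0, std, d))
    # Easy "poison" signal: present in negative bags at train time but in
    # positive bags at test time, so a model that learned "poison => negative"
    # fails at test time, while a valid MIL model ignores it.
    if (split == "train" and label == 0) or (split == "test" and label == 1):
        bag.append(gauss_vec(5.0, 0.1, d))
    return bag

train_neg = make_bag(label=0, split="train")  # 8 background + 1 poison
test_pos = make_bag(label=1, split="test")    # 8 background + 1 positive + 1 poison
```

The key property is that the training data admits both a valid MIL rule (detect the positive concept) and an invalid shortcut (detect the poison), and only the former generalizes to the test distribution.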
Significance: Please see our all-reviewer rebuttal, which we believe addresses the concern you have raised. In many cases, MIL is a necessary defense against unknown biases that have prevented ML use in clinical settings, and a defense against attacks performed against real-world anti-virus systems. The unknown violation of the MIL hypothesis creates unrecognized risk in both settings and a broadly insufficient scientific understanding in our field. --- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal comments. The direct responses to my questions and the general rebuttal have helped provide clarity around the work, especially on the implications of not adhering to the MIL assumptions. I appreciate the additional experiment for L1. I would still like to push for a summary table in the main body with simple ticks and crosses highlighting which models pass which tests, potentially placed early on the first or second page. I think this would help showcase the main takeaway that CausalMIL and mi-Net should be the best starting points for new models. I will increase my score in light of the strong rebuttal. --- Reply to Comment 1.1.1: Comment: We are very appreciative of the score raise and glad we could address your concerns! Please do not hesitate to let us know of any other items that arise. We will absolutely include a summary table in the main body showing which models pass which tests; it has already been written. Thank you again for your time.
Summary: This paper investigates whether multiple-instance learning (MIL) models actually respect the constraints of MIL problems. The paper defines MIL problems as the classification of bags of instances, such that a bag is only classified positive if some instance is classified positive (or via a more complex rule based on the positive classes of the instances, but crucially, not on negative instances, cf. Eq 1 in the paper). Therefore, contrary to standard classification problems, there is an asymmetry between positive and negative classes in MIL problems. The authors design tests to check whether proposed MIL models structurally enforce this constraint. More specifically, they design one test for presence-based MIL, a more complex test for threshold MIL, and a test to check that, for the latter, no degenerate rule is found. Each test is based on synthetic data and is designed to fool models that do not enforce the MIL constraint. Experimental results demonstrate that many deep MIL models actually do not enforce the MIL constraint as understood by the authors, except for two of them (Table 1). Strengths: - Well written paper, congrats to the authors for the clarity - The methodology is sound - The experimental results are compelling Weaknesses: - Limited impact: this paper shows that what the community calls "MIL models" is closer to "set-of-instances classifiers". I think it does not mean that they are irrelevant in some problems; but it is true that if there are issues at stake, one cannot trust these models to enforce the MIL constraints. To sum up, the main impact of the paper for the community would be to clarify what "MIL" actually means. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: - The motivation of the paper remains unclear to me. Why is it so important that MIL models enforce what the authors present as the natural structure of MIL problems? 
The sentence "a MIL model cannot **legally** learn to use $\emptyset_P$" L245 seems to indicate there are regulatory or legal issues at stake, but more details would be appreciated. - I fail to understand the motivation for the false-frequency reliance test: what is the problem of a true MIL model not passing this test? If there are no instance-level annotations or additional prior on the problem, I do not see why such models would not be used. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We hope the below fully addresses your concerns. Please do not hesitate to let us know if any further clarification is required. > Why is it so important that MIL models enforce what the authors present as the natural structure of MIL problems? Please see the paragraphs we added in the all-reviewer rebuttal, which we believe help to articulate why MIL is important. As you noted in the weaknesses, the current "MIL" models that violate the assumptions may still be very valuable as set-classifiers, but it is thus important that we compare like-to-like, so that we can identify which good set-classification techniques have been inadvertently labeled "MIL". >a MIL model cannot legally learn to use We now see this was not the best word choice on our part. There is no regulatory constraint at stake (as far as we are aware). The purpose was to convey that learning to use $\emptyset_P$ is outside of the hypothesis space of MIL models, and so its use indicates a violation of the MIL hypothesis. We will re-word accordingly. >the motivation for the false-frequency reliance test (Alg. 3) Let $\mathcal{H}_0$ denote the hypothesis class of the standard MIL model, and $\mathcal{H}_1$ the threshold MIL. It is the case that $\mathcal{H}_0 \subset \mathcal{H}_1$. Two important questions are: 1) Can the model I'm using represent $\mathcal{H}_1$? 2) Can the learning procedure successfully produce solutions from all of $\mathcal{H}_1$ when necessary? For most of the deep models (i.e., excluding mi-Net), the answer to (1) is Yes. However, our test shows that (2) is sometimes false (and false for different algorithms compared to Alg. 2 in the paper). Instead, an incomplete solution is found, which is important in understanding the scope of capabilities/limitations of the algorithm under test. The false-frequency test is also relevant to different properties of interest. 
For example, if many background objects might be expected in deployment (e.g., benign healthy cells), then even just low performance (without outright failure) on Alg. 3 may be a warning that the approach will not work well. We will work these points and clarifications into the revision. --- Rebuttal Comment 1.1: Title: Thanks Comment: Thanks for your answers, which have clarified the motivation for the paper as well as the questions I had. I will update my score to 7. --- Reply to Comment 1.1.1: Comment: We are very glad we could resolve your questions and appreciate the raised score, thank you! Please let us know if any other questions arise.
Summary: The paper deals with multiple instance learning (MIL), where items are considered together in a bag/collection: the presence of certain items in the bag implies a positive label for the whole collection; otherwise, the collection has a negative label. The paper discusses prior MIL methods that do not respect implicit MIL assumptions (i.e., they learn a degenerate solution), and proceeds to develop algorithmic unit tests to check whether a model satisfies those assumptions. Their main contributions are: 1. Designing a unified framework for checking MIL assumptions by creating synthetic datasets that check for one or more of the implicit MIL assumptions 2. Running experiments to show how often algorithms fail these tests despite being the most recent ones. 3. Arguing that not passing these tests means a model is not performing MIL correctly; however, passing these tests does not imply certification. Strengths: The problem setting and idea are generally interesting. The work seems novel, with detailed experimentation. Weaknesses: I believe the paper’s writing/organization can be improved. For example: 1. Lines 51-66 discuss the paper’s sections. It would strengthen the paper if it instead discussed at least a few of the algorithmic unit tests/MIL assumptions, which would give the reader a better understanding of what the paper is trying to do. Reading the introduction does not give specific information like this. Highlighting how a popular MIL method fails an important MIL assumption would also make the introduction much more intriguing. 2. Also, **there is no mention of reproducibility in the whole paper other than the title and related work**. I am genuinely curious about how reproducibility is related to the algorithmic unit tests here, or what “reproducibility” means in this context. Some clarification from the authors would be important. 3. The paper is missing a “preliminaries” section, and it is not self-contained. 
It ought to contain a section describing the problem setup of MIL. Also, I think the theorems are rather straightforward and do not contain any particularly interesting insight. They can be seen as **tautological** with the constructed tests. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. Line 163, is $g(\{c_1, \ldots, c_K\})$ a function or a value of the function at a particular point? What is its domain and range? A better notation would be $g: N_{\geq 0}^d \rightarrow \{+1, -1\}$. 2. Line 250, why is the presence-based test needed if it is a subset of the threshold-based test? An explanation should be included in the main paper. 3. Line 209, how does $N(0, 3)$ denote a normal distribution in d-dimensional space? What do 0 and 3 refer to here? 4. Line 304, “learn a noisy but erroneous …”, not sure what is the difference between a noisy and erroneous function, I assumed noise → erroneous? 5. I would be curious to see some sort of exploration with real-world MIL benchmarks (instead of the synthetic datasets used here), why do they not check for these MIL assumptions/why would good performance on these benchmarks not correlate with satisfying MIL assumptions? What does that tell us about the MIL problem setting in general/prior benchmarks? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are at the response limit. Please let us know if anything was not satisfied. Q1: $g()$ is a function; its domain is the set of count vectors where each $c_k$, for $k \in [1, \ldots, K]$, is an integer $\geq 0$ (though it could be relaxed to continuous non-negative values as well; this has no impact on our results, but MIL is commonly thought of in integer counts). We believe your notation proposal is correct, though we have not seen it used in the MIL literature off the top of our heads. We are happy to consider this notation alternative. Q2: The presence test is needed because an algorithm may only be designed to satisfy the presence version of the MIL problem. This occurs with the `mi-Net` algorithm, which is designed for and passes the presence test, but fails the threshold test. `mi-Net` indeed satisfies all algorithmic claims it makes, and so should not be "penalized" for failing the threshold test. This is relevant in the malware case: if one malicious function is present at all, the binary is malicious. The "amount" of malicious content does not change this fact. In the clinical case, sometimes thresholds are the critical differentiator. E.g., mold is present and measurable in all places, but is only considered a problem if the amount measured is beyond an acceptable threshold. Q3: This denotes a $d$-dimensional normal where the mean $\boldsymbol{\mu} = [0, 0, \ldots, 0]$ and the covariance $\Sigma = I_d \cdot 3$. Based on your and reviewer BYpB's feedback, we will revise the paper to use the notation $\mathcal{N}(\vec{0}, I_d \cdot 3)$ to make it clear that the mean is a vector with a diagonal covariance. Q4: The reviewer's point is well taken; our verbiage was redundant in this context since the problem can be solved without any noise, and so any level of noise would imply at least a degree of error. We will revise it to just "erroneous" in that section.
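For the $\mathcal{N}(\vec{0}, I_d \cdot 3)$ notation in Q3, a minimal numpy sketch of the sampling being described; the dimension and sample count here are arbitrary illustrative choices, not settings from the paper:

```python
import numpy as np

# Illustrative sketch (not the paper's code) of sampling from the clarified
# notation N(0_vec, I_d * 3): a d-dimensional Gaussian with zero mean vector
# and isotropic covariance 3 * I_d.
d, n = 5, 1000  # hypothetical dimensionality and sample count
rng = np.random.default_rng(0)
mean = np.zeros(d)
cov = 3.0 * np.eye(d)
instances = rng.multivariate_normal(mean, cov, size=n)  # shape (n, d)
```

Each coordinate then has variance 3, which is what the scalar in the original $N(0, 3)$ shorthand referred to.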
Q5: The CausalMIL work (citations 32, 33 in the article) has already shown empirically that respecting the MIL assumptions can lead to significant gains in accuracy. It would be hard to adapt our benchmarks to "real world" data because we would need to label the bag-level information for every data point to construct our testing approach. The benefits of synthetic tests like this are: 1. Once it is shown to fail, it does not matter what any other dataset may look like, because an existence certificate of non-MIL behavior has been achieved. 2. This makes it easy to apply our tests to any MIL model (e.g., many computer vision-based MIL algorithms would be hard to adapt to NLP/cyber security problems). We have added new text to an all-reviewer rebuttal that we believe will satisfy this concern. > Lines 51-66 discuss the paper’s sections. It would strengthen the paper if instead the paper discusses at least a few of the algorithmic unit tests/MIL assumptions that would give the reader a better understanding of what the paper is trying to do. Reading the introduction does not give specific information like this. Also highlighting how an example popular MIL method fails an important MIL assumption would also make the introduction much more intriguing. Alg. 1 and 2 test the fundamental property of the MIL hypothesis: that you cannot make a positive prediction ($y = 1$) based on the absence of an instance $\boldsymbol{x}_i$ from the larger bag $X$. The difference between them is that in Alg. 1, there is only one instance necessary to make a bag label become positive. In Alg. 2, there are two instances that must occur for a positive label. This distinction is important as it separates two versions of the MIL problem: the Standard MIL (Alg 1) and Threshold MIL (Alg 2). Alg 3 tests that a Threshold MIL algorithm must be able to count and recognize two different items to detect a positive bag.
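A rough sketch of the core property tested by Alg. 1 (our own illustration; the concept ids, bag construction, and predictor names are all hypothetical, not the paper's actual tests): a prediction must never turn positive because an instance is absent from the bag.

```python
import numpy as np

rng = np.random.default_rng(0)
KEY = 7  # hypothetical id of the "positive" instance concept

def make_negative_bag(size=10):
    # Background concepts 0..6 only; the key concept is absent.
    return list(rng.integers(0, 7, size=size))

def violates_absence_property(predict, bag):
    """Return True if removing any single instance from a negative bag
    flips the model's prediction to positive (a non-MIL shortcut)."""
    for k in range(len(bag)):
        if predict(bag[:k] + bag[k + 1:]) == 1:
            return True
    return False

# A valid standard-MIL predictor keys only on the presence of KEY...
mil_predict = lambda bag: 1 if KEY in bag else -1
# ...while a bag-size shortcut predicts positive from what is missing.
size_predict = lambda bag: 1 if len(bag) < 10 else -1
```

Under this toy test, `mil_predict` passes while `size_predict` fails, mirroring the distinction between models that respect the MIL hypothesis and those that exploit absence-based shortcuts.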
As an example of how an algorithm can fail the MIL model, consider the case of cancer detection. The goal is to detect any _instance_ of a cancerous cell, which indicates the whole sample (i.e., _bag_) is infected. However, noise in the data may cause spurious items to correlate with a non-cancerous prediction: mislabeled data, artifacts from the imaging equipment, the tool used for excising the sample, the clinician who collected the samples, and more could correlate with a benign prediction. A non-MIL model is free to learn to use these spurious anti-correlated features. But in deployment, the physician, hospital, and equipment, all change -- and so the non-MIL model's accuracy may drop. Indeed, the failure of medical ML models to generalize has been a noted problem (see all-reviewer citations), and so ensuring MIL assumptions can help avoid these high-risk failures. We will add all the above to the introduction. > Also, there is no mention of reproducibility in the whole paper other than the title and related work. I am genuinely curious about how reproducibility is related to the algorithmic unit tests here, or what “reproducibility” means in this context. some clarification from the authors would be important. Many papers that study reproducibility focus not on generating the same results, but on issues in procedure that would prevent robust conclusions about utility. This includes methodological issues like not tuning baselines [1], or insufficient statistical tests [2]. Our work falls in this latter group of identifying a methodological issue, in not validating the hypothesis constraints. 1. https://doi.org/10.1145/3298689.3347058 2. https://proceedings.mlsys.org/paper_files/paper/2021/hash/0184b0cd3cfb185989f858a1d9f5c1eb-Abstract.html > The paper is missing a “preliminaries” section, and it is not self-contained. It ought to contain a section describing the problem setup of MIL. We attempted to interleave this with the introduction. 
The only detail we might say is missing is that most MIL algs work by learning an instance classifier $h(x)$, and using $\hat{y} = \max_i h(x_i)$ to make a prediction. We will add this to the introduction. --- Rebuttal Comment 1.1: Comment: I thank the authors for writing compact answers to my questions and thoughts. The authors' answers improved my understanding of the paper, where I misunderstood a few aspects before that are now clarified. I have increased the score from 3 to 6 accordingly. --- Rebuttal Comment 1.2: Comment: I would suggest improving the paper's writing. There are a few grammatical mistakes which make the paper less readable. 1. Line 370-371: The goal is that models are tested to the properties the properties to have. In addition to many more typo/writing issues reviewer 3dcE has mentioned. Furthermore, a discussion of real-world MIL benchmarks/settings [1, 2], their construction, and whether their construction respects/does not respect the MIL assumptions here would strengthen the paper and showcase its novelty. [1] mil-benchmarks: Standardized Evaluation of Deep Multiple-Instance Learning Techniques, https://arxiv.org/abs/2105.01443 [2] Multiple Instance Learning: A Survey of Problem Characteristics and Applications, https://arxiv.org/abs/1612.03365 --- Reply to Comment 1.2.1: Comment: We appreciate the raised score in light of the clarification. Thank you! We have fixed all mentioned typos and will carefully review to correct any remaining typos. 2105.01443: This appears to be 2 versions of the Standard MIL formulation, 1 version of the Threshold MIL, 1 appears to be a version of the extended GMIL per Foulds & Frank, and one that might be a Counting-GMIL per Foulds & Frank. 1612.03365: This has three synthetic/semi-synthetic tasks (amongst other real datasets). Newsgroups, Letters, and Gaussian are all standard MIL. We will add these related works and a discussion about prior synthetic MIL datasets to the camera ready.
We will emphasize that we are not the first to create synthetic data with the goal of testing specific properties and their identifiability by a MIL model. However, to the best of our knowledge, ours is the first work to create adversarial test sets with a MIL solution, and a non-MIL solution, as a way to test that a model restricts itself to a valid hypothesis. 1612.03365 is also an excellent catalog of many real-world datasets for MIL, as you note, that highlights the importance of understanding (1) which version of the MIL hypothesis we wish to leverage, and (2) why it is important to respect the (desired) MIL hypothesis. This will also be worked into the discussion. We hope this further clarifies any concerns, and we are again appreciative of your time. Please let us know if any further thoughts remain, and we hope you have a great weekend!
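The instance-classifier-plus-max prediction rule mentioned in the rebuttal ($\hat{y} = \max_i h(x_i)$) can be sketched as follows; the scorer `h` here is a hypothetical stand-in (sign of the first feature), not a trained model from the paper:

```python
import numpy as np

# Sketch of the common MIL prediction rule: learn an instance classifier
# h(x) and predict the bag label as the max over instance scores.
def h(x):
    # Hypothetical instance scorer: score is simply the first feature.
    return float(x[0])

def bag_predict(bag):
    """bag: list of instance feature vectors; positive iff max score > 0."""
    return 1 if max(h(x) for x in bag) > 0 else -1

bag_with_positive = [np.array([-1.0, 0.2]), np.array([2.0, 0.0])]
bag_all_negative = [np.array([-1.0, 0.2]), np.array([-0.5, 0.0])]
```

Because only the maximum instance score matters, adding irrelevant negative-scoring instances cannot flip a prediction, which is exactly the constraint the unit tests probe.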
Rebuttal 1: Rebuttal: We are pleased most reviewers found our paper readable, sound, and technically novel in identifying a previously undocumented issue in the Multiple Instance Learning literature. One shared note from reviewers was that we could more strongly communicate the importance of this to readers outside the standard MIL audience. We will add the below text to the manuscript and hope it satisfies your concerns. Algorithms that fail, or intentionally forgo, the MIL constraints may appear to obtain better accuracy "in situ" (i.e., the lab environment). But if it is known that the MIL assumption is true, ignoring it creates a significant risk of failure to generalize "in vivo" (i.e., in real production environments). In the clinical context, this is important as many ML algorithms are often proposed with superior in situ performance relative to physicians [1], but fail to maintain that performance when applied to new clinical populations [2,3,4]. In this case, respecting underlying MIL properties eliminates one major axis of bias between in situ and in vivo settings and grants higher confidence in potential utility. In the cyber security space, respecting the MIL nature eliminates a class of "good word" style attacks [5,6,7] where inconsequential content is added to evade detection, an attack that has worked on production anti-virus software [8]. These reasons are precisely why MIL has become increasingly popular, and why it is important to ensure the constraints are satisfied. For these reasons, we would suggest practitioners/researchers begin with CausalMIL and mi-Net as a solid foundation to ensure they are actually satisfying the MIL hypothesis, and thus avoiding excess risk in deployment. Notably, this creates a dearth of options when more complex MIL hypotheses are required, as CausalMIL and mi-Net succeed by restricting themselves to the Standard MIL assumption.
The creation of MIL models that satisfy this, and other more complex hypotheses, is thus an open line of research that would have potentially significant clinical relevance. Similarly, users with more niche MIL needs may desire to more thoroughly test that their models respect the constraints critical to their deployment. Our work has demonstrated that many articles have not properly vetted the more basic MIL setting, and so we suspect other more complex MIL problems are equally at risk. 1. Poore GD, Kopylova E, Zhu Q, Carpenter C, Fraraccio S, Wandro S, Kosciolek T, Janssen S, Metcalf J, Song SJ, Kanbar J, Miller-Montgomery S, Heaton R, Mckay R, Patel SP, Swafford AD, Knight R. Microbiome analyses of blood and tissues suggest cancer diagnostic approach. Nature. 2020 Mar;579(7800):567-574. doi: 10.1038/s41586-020-2095-1. Epub 2020 Mar 11. PMID: 32214244; PMCID: PMC7500457. 2. Abraham Gihawi, Yuchen Ge, Jennifer Lu, Daniela Puiu, Amanda Xu, Colin S. Cooper, Daniel S. Brewer, Mihaela Pertea, Steven L. Salzberg. bioRxiv 2023.07.28.550993; doi: https://doi.org/10.1101/2023.07.28.550993 3. Varoquaux, G., Cheplygina, V. Machine learning for medical imaging: methodological failures and recommendations for the future. npj Digit. Med. 5, 48 (2022). https://doi.org/10.1038/s41746-022-00592-y 4. Wynants L, Van Calster B, Collins G S, Riley R D, Heinze G, Schuit E et al. Prediction models for diagnosis and prognosis of covid-19: systematic review and critical appraisal. BMJ 2020; 369:m1328 doi:10.1136/bmj.m1328
NeurIPS_2023_submissions_huggingface
2023
Language Models Can Improve Event Prediction by Few-Shot Abductive Reasoning
Accept (poster)
Summary: This paper studies using large language models (LLMs) to assist event prediction. The authors proposed a pipeline of procedures, where 1) a traditional event sequence model is first applied to predict a set of possible events along with their time stamps, 2) then for each event candidate, an LLM is used to reason about their cause events, based on which a set of relevant events are retrieved from the history, and 3) finally the retrieved relevant events are used by a reranking model to score each event candidate. This framework is tested on two datasets, GDELT and Amazon Review, and the results showed that the LLM-based reranking approach outperforms the naive event sequence model. Strengths: 1. This work studies leveraging LLMs for temporal modeling. This is an important topic as it can be used to support many applications (e.g., political event forecasting). Using LLMs for reranking also seems to be novel. 2. The authors have conducted experiments on two datasets, both showing an advantage of the LLM-based reranking. 3. This paper also comes with analyses of the proposed framework, such as how tuning the size of event candidates (M) can affect the overall performance and whether the LLM is sensitive to different variants. Weaknesses: 1. The paper needs significant justifications for the experiment design. - While the authors formulated the event prediction task into two subtasks, the majority of the experiments have focused on only one of them (i.e., event type prediction), whereas the discussion on the time stamp prediction is rather brief. If the authors regard the time prediction task as an important subtask, then they may want to consider using a dataset that can facilitate more careful discussions other than GDELT. - The event prediction on GDELT is not end-to-end. Instead, the current experiments always assume one or two (ground-truth) components from each <subject, predicate, object> triplet and aim to predict the remaining. 
This could show a discrepancy between the evaluation and the realistic application. - More clarifications about each dataset are needed. For example, it is unclear how many events are provided as history in each prediction. 2. Results in Figure 2 and several other figures showed worse performance as M increases. This could suggest that as the LLM-based reranker is exposed to more candidates, it ranks the truth even worse. Looking into the description of Mean Rank in lines 250-256, it can be imagined that as M increases, more test cases (whose ground truth is included in the top-M) will be counted into "N". In other words, when M has different values, the Ns have been different, so the results of the same approach across different Ms are not really comparable. However, this is not clarified in the paper and the current plots read as confusing. 3. The proposed framework heavily relies on the event sequence model. Based on Figure 3, it seems that the poor performance of the event sequence model is the real bottleneck (<0.1 MAR in Fig 3(b)). In other words, while introducing LLMs does lead to improvement in event reranking (under the current experimental setting), it is not solving the real bottleneck. Therefore, I'm not very convinced by the contribution of this framework. Instead of reranking, a more worth-studying problem is how to use LLMs to more accurately predict future events in the first place. 4. More explorations on the use of LLMs in reranking can provide more insight to the community. It is noticed that, while the contribution of this paper is about using LLMs for event reranking, none of the baselines or variants follow the same reranking idea. Is there a better way to use LLMs for event reranking? For example, can one use SentenceBERT or BM25 to retrieve relevant events from the history, and then directly use LLMs to score the candidate event? 5. Some of the writing can be more self-contained.
For example, the paragraph in line 51 introduced several math formulations but there is not a single reference, and the intuition of these formulas is not explained clearly. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Please try to address my comments in Weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: The authors discussed the limitations and potential negative impact in Appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
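The Mean Rank concern raised in Weakness 2 (only test cases whose ground truth lands in the base model's top-M are counted) can be made concrete with a short sketch; this is our own illustration, not the paper's evaluation code:

```python
# Sketch of a Mean Rank computation where the counted set N depends on M:
# for a given M, only test cases whose ground truth appears in the base
# model's top-M candidates contribute, so the denominator N changes with M.
def mean_rank(ranked_lists, truths, M):
    ranks = []
    for cands, y in zip(ranked_lists, truths):
        top_m = cands[:M]
        if y in top_m:                        # this case is counted into N
            ranks.append(top_m.index(y) + 1)  # 1-based rank
    n = len(ranks)
    return (sum(ranks) / n if n else float("nan"), n)

ranked = [[3, 1, 2], [2, 3, 1], [1, 2, 3]]  # toy candidate rankings
truths = [1, 1, 1]
# With M=1 only the last case is counted (N=1); with M=3 all three are
# counted (N=3), so the two Mean Rank numbers are not directly comparable.
```

This illustrates why comparing the same method across different M values is misleading, while comparing different methods at the same M remains meaningful.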
Rebuttal 1: Rebuttal: Thank you for your feedback! > consider using a dataset that can facilitate more careful discussions other than GDELT. We added new results on a new dataset, ICEWS. It is similar to GDELT but less dense in time, so it is meaningful to predict time on this new dataset. Please see [New Dataset] for details. In the final version, we will include a full set of experiments on ICEWS (all baselines, all analysis) just as we did for GDELT. > assume one or two (ground-truth) components from each <subject, predicate, object> triplet and aim to predict the remaining. This could show a discrepancy between the evaluation and the realistic application. Our evaluation setup matches real-world applications. Please see [About Experiment Design]. > More clarifications about each dataset are needed. For example, it is unclear how many events are provided as history in each prediction. We will add this detail to the final version: "history" includes all the previous events in the data. > when M has different values, the Ns have been different, so the results of the same approach across different Ms are not really comparable. However, this is not clarified in the paper and the current plots read confusing. Sharp catch! Yes, our current text in Lines 264 to 266 is misleading. We didn't mean to compare the same method across M; instead, we meant to compare different methods for the same M. We apologize and will definitely correct our presentation. > proposed framework heavily relies on the event sequence model… it is not solving the real bottleneck… a more worth-studying problem is how to use LLMs… Which one is the real bottleneck seems to be a subjective call. Please see [Contribution, Significance, and Impact] for our justification of the problem setting and our technical approach. > none of the baselines or variants are following the same reranking idea. Is there a better way to use LLMs for event reranking?
> can one use SentenceBERT or BM25 to retrieve relevant events from the history, and then directly use LLMs to score the candidate event? In [New Baselines], we added several new baseline methods that follow the same reranking idea of our framework. One of them retrieves with sBERT and reranks with our method, but it works poorly. That is because sBERT tends to retrieve events that are similar to the proposal (and BM25 will as well), but the real causes may look very different. Fortunately, LLMs may generate real causes that look very different. We suspect that using LLMs as a scoring function would only make the performance worse because - LLMs do not seem to be as good at numbers as they are at words; - LLMs are pretrained but our proposed reranking module has been trained on the task (so it is more specialized). > Some of the writing can be more self-contained. For example, the paragraph in line 51 introduced several math formulations but there is not a single reference, and the intuition of these formulas is not explained clearly. We are sorry that we made this background paragraph too compact due to the page limit. In the final version, we will appropriately label these formulas, refer to them when needed, and clarify the intuition. --- Rebuttal Comment 1.1: Comment: Thank you for your response (to me and other reviewers)! It well addressed my concerns. The newly added experiments are helpful. I'd like to see them along with the justification of the task setting and your contribution in the next draft. I've raised my score. --- Reply to Comment 1.1.1: Title: Thank you for your support! Comment: Thank you very much for your support! We are thrilled to hear that our response has addressed your concerns. And we are definitely committed to improving our presentation for the next version. Later in the Discussion phase, we will post our plans for improving the paper. (We will hold it until then in case any reviewer requests anything new.)
Summary: This paper studies the problem of predicting future world events based on the past and proposes an approach that combines existing event sequence models with the powerful (abductive) reasoning ability of large language models (LLMs), resulting in better performance both for predicting the actual event as well as predicting the time of the event. Strengths: * Predicting future world events based on the past is an important problem. * The idea of leveraging LLMs for future event prediction is interesting and can spark further research in this direction. * Experimental results are strong, albeit on a small set of datasets. Weaknesses: * The LLM prompts presented on Page 4 are a bit worrisome as they hint at a possible leakage: The time of the first demonstration example is 2022-03-08 and so is the time of the queried effect. Could you please clarify how you selected the fewshots, and whether there might be any leakages? * I’m a bit confused about the metrics. From what I understand, the performance of the base Event Sequence model is computed on all the data, but the performance of the LLM-augmented models is only computed on the examples where the correct answer was among the top answers of the base model. Is this correct or did I misunderstand something? * Some of the presented results on the two datasets are strong, but given the simplicity of the proposed solution, it would have been more convincing if results on more datasets were reported, especially because the mean rank results on the Amazon dataset are not that glaring. (To be clear, I like the simplicity of the approach) * Given that the LLM module is the main novelty of the work, I would expect to see some experiments where the LLM is replaced with 1) a random selection module that selects 10 pieces of evidence at random, 2) a heuristic-based selection module that, e.g., selects the most recent 10 pieces of evidence with the same subject.
* Some qualitative examples are needed to show how the LLM module actually helps make better predictions. * [Minor] The font in Figure 1 is a bit small when printed. Maybe make the size of box 3 smaller (rotate the text if needed) so that the other parts become slightly bigger. Technical Quality: 3 good Clarity: 3 good Questions for Authors: * The abductive reasoning module is quite similar to the backward reasoning works such as [1, 2, 3]. Could the authors say a few words about whether it is possible to bring more of backward reasoning into their framework? As an example, can we predict multiple hops of causes instead of one (e.g., for an event E, predicting a cause C and then predicting a cause C’ for C, …)? * Any idea how all three modules can be trained end-to-end instead of training separate modules? * Are the Event Sequence Model and Reranking Model both trained on the same subset of the data? * [Minor] Why not use Mean Reciprocal Rank instead of Mean Rank? The former has been advocated in many previous works. * [Minor] LLMs can hallucinate causes and it seems like all causes will be mapped to an event. I wonder whether it helps if the authors ignore the causes whose similarity to existing events is below some threshold. * [Minor] In line 150, how is time represented so that the model can know about the distance between the events? [1] LAMBADA: Backward Chaining for Automated Reasoning in Natural Language [2] Neural story planning [3] Language Models with Rationality (I understand that this paper was published too close to the NeurIPS deadline) Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: * It is typical in real-world problems to have both temporal and non-temporal facts/events. 
Can the approach be extended to this case? * It is typical in real-world problems to have temporal events that have a timestamp and temporal events that have a time interval. Can the approach be extended to work with intervals? * Can the model predict that no event will happen? Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your constructive feedback and for being supportive! We added new results for you; please see [New Baselines] and [New Dataset]. Now we'll answer your remaining questions. > The time of the first example is 2022-03-08 and so is the time of the queried effect… clarify how you selected the fewshots, … any leakages? We randomly sampled training events and manually found evidence/cause events for each of them. The raw data has HH-MM-SS information so we know the temporal order of same-day events. For presentation simplicity, we omit HH-MM-SS in examples. We'll clarify it (or use a better example) in the final version. We have taken efforts to ensure that there is no leakage in data. For GDELT and ICEWS, we use data that is surely not in GPT training data. App. C.5 in the paper explains why GPTs do not memorize Amazon Review data. For each dataset, we split train/dev/test based on time-stamps of events (see Line 214). > I’m a bit confused about the metrics… All methods are evaluated where the correct answers were among the top answers of the base model. That is why we feel it necessary to show MAP and MAR of base models (see Figure 3): MAP and MAR measure the precision and coverage of each base model; mean rank and RMSE measure how much better our framework could do given the precision and coverage of the base model. > it would have been more convincing if results on more datasets were reported Sure! Please see our new results in [New Dataset], and we hope you like them! > 1) selects 10 pieces of evidence at random, 2) selects the most recent 10 pieces of evidence with the same subject… We added these new baselines (and more), and they all underperform our framework; see [New Baselines]. > font in Figure 1 is a bit small… make the size of box 3 smaller (rotate the text if needed)... Great suggestion! We'll do it in the final version. > say a few words about whether it is possible to bring more of backward reasoning into their framework?
> predict multiple hops of causes instead of one (e.g., for an event E, predicting a cause C and then predicting a cause C’ for C, …)? We drafted a long and in-depth discussion for these questions, but we couldn't show it in this message due to the 6000-char limit. Here we only give the TLDR, but we can post the full discussion via an Official Comment in the Discussion phase, if you are interested. We will add the full discussion in the final version. Abductive reasoning is a type of reasoning ability, in parallel with deductive reasoning and inductive reasoning. Backward chaining is a general method for reasoning, in parallel with forward chaining and resolution. In this work, we leverage the abductive reasoning abilities of LLMs, and our proposed method resembles backward chaining (although it only does 1-step reasoning). When data is complete (i.e., fully observed, no missing events), 1-step seems to be good enough: both C' and C are in history, and retrieving C seems more proper since C is more recent. When data is incomplete, multi-step reasoning seems to be necessary: maybe C is not observed in history but C' is, so we have to do a 2nd step of reasoning and retrieve C'. This is an extension of our current method. > Any idea how all three modules can be trained end-to-end Yes. Conceptually, one can do: - use retrieved causes to regularize (e.g., the attention mechanism of) event sequence models - prompt the LLM so that it generates cause events that tend to have high reranking scores These can be future directions and we will discuss them in the final version. > … both trained on the same subset of the data? They use the same set of training data, but the reranking model uses negative samples drawn from the event model. They are both early-stopped based on dev results. > Why not use Mean Reciprocal Rank We thought Mean Rank was the norm in the IR community. We can add Mean Reciprocal Rank in the final version.
> I wonder whether it helps if the authors ignore the causes whose similarity to existing events is below some threshold. We tried this idea and showed new results in [New Baselines]. There is a small difference (thresholding is worse), but a heavier tuning of the threshold may close the performance gap. > In line 150, how is time represented… model can know about the distance between the events? In the continuous-time Transformer of Yang et al. 2022, time is represented via a temporal embedding mechanism (a generalization of positional embedding) so high-layer representations are aware of the temporal distances between events. > It is typical in real-world problems to have both temporal and non-temporal facts/events. Can the approach be extended to this case? Yes, we believe that the general propose-LLM-retrieve-rerank pipeline can apply to other settings, although they may need non-trivial technical extensions. Here is how it may apply to language-based logical reasoning: given a theory of multiple statements, a base model can propose possible conclusions; the LLM proposes possible explanations for each proposed conclusion; the retriever finds the most relevant statements from the theory; the reranking module examines if the retrieved statements can actually prove the proposed conclusion. We will add this discussion in our final version. > Can the approach be extended to work with intervals? If the base event sequence model is capable of handling intervals, then it seems straightforward for our framework to handle intervals as well (through the base model). > Can the model predict that no event will happen? That is the duty of the base event sequence model. If the most recent event is at time t0, and the base model predicts the next event at time t1, then any time in (t0, t1) is regarded as "no event happens here". The base models we use in this paper are all probabilistic point processes, so "no event will happen ever from now" has zero probability under the base model.
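As a rough illustration of what the temporal embedding mechanism mentioned above might look like, here is a sinusoidal sketch in the spirit of positional embeddings; the actual mechanism in Yang et al. 2022 may differ in its details, and this is our own illustration only:

```python
import numpy as np

# Sketch of a continuous-time temporal embedding: map a real-valued
# timestamp t to a vector so that inner products between embeddings
# reflect temporal distance between events.
def temporal_embedding(t, dim=8):
    """Hypothetical sinusoidal embedding of a timestamp t."""
    i = np.arange(dim // 2)
    freqs = 1.0 / (10000.0 ** (2.0 * i / dim))  # geometric frequency ladder
    angles = t * freqs
    return np.concatenate([np.sin(angles), np.cos(angles)])
```

Unlike integer positional embeddings, this accepts arbitrary real timestamps, which is what lets higher layers reason about continuous inter-event gaps.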
> qualitative examples needed We'll add them in the final version. See [Qualitative Results]. --- Rebuttal Comment 1.1: Title: Reviewer Response Comment: Thanks for the detailed response, and for providing the extra analysis and results. * **Regarding leakage**: Thanks for clarifying. My main concern with respect to leakage is now resolved, although I still believe it would be better to consider a larger time gap than a few minutes or hours between train and test sets to ensure no unwanted leakage. Maybe providing some statistics on the time difference quantiles between test set questions and the latest few-shot example could further resolve the issue. * **Regarding baselines**: In the case of the ANHP-rec-10-evt baseline, are you using the 10 most recent events with the same subject, or just the 10 most recent events? If the former, do you have any intuition why it is performing so poorly (even worse than random)? * **Regarding single vs multi-hop**: I found your response on multi-hop mostly being helpful when the data is incomplete quite helpful and insightful. I strongly suggest adding it to the next draft. * **re mean rank vs mean reciprocal rank**: The problem with mean rank is that one bad prediction can substantially outweigh many good predictions. This is not the case for mean reciprocal rank, though. So I often find comparisons in terms of MRR more meaningful than MR. --- Reply to Comment 1.1.1: Title: New Results - Leakage, 10-Most-Recent-Same-Subject, MRR Comment: We highly appreciate your engagement and prompt reply! We ran more experiments and hope the new results further address your concerns. [More Results for Leakage] > consider a larger time gap… statistics on the time difference quantiles between test set questions and the latest few-shot example… Definitely. Our few-shot demonstrations cover a wide time range of training data, and there is indeed a big gap between train and test. 
Precisely, the latest demonstration is on 2022-06-07, and the earliest test event is on 2022-07-15 (and dev events stay in-between). Below is a table of percentiles (in days): | 0% | 1% | 5% | 25% | 50% | 75% | 95% | 99% | |-----------|-----------|-----------|------------|--------------|------------|------------|-------------| | 38 | 40 | 41 | 44 | 52 | 54 | 55 | 55 | [Recent Events with Same Subject] > 10 most recent events with the same subject, or just the 10 most recent events? > do you have any intuition why… even worse than random ANHP-rec-10-evt only uses the 10 most recent events, and we just experimented with ANHP-rec-sub-10-evt, a new baseline that uses the 10 most recent events with the same subject. Their MRR results are in [MRR Results]. The takeaways are: - ANHP-rec-sub-10-evt is slightly (but almost negligibly) better than ANHP-rec-10-evt. - both are worse than random. Why does "most recent" work so poorly? It is perhaps because "recent" is a bad inductive bias. On dev data, we analyzed the cause events retrieved based on LLM-generated clues and found that many of them are not "recent": for each dev event, we track the time gap between it and its closest retrieved evidence event; then we compute the percentiles of these time gaps; the table of percentiles (in days) is shown below. | 5% | 25% | 50% | 75% | 95% | |-----------|-----------|-----------|------------|--------------| | 0.54 | 2.15 | 3.81 | 24.79 | 38.66 | The "same subject" inductive bias is not very helpful for predicate prediction. When predicting predicates, the subject and object are known, and thus the negative samples (given by base model) will all have the same subject. As a result, all the candidates will have the same set of "evidence", making it very difficult for the reranking model to learn to figure out which predicate is correct. However, "same subject" inductive bias may be helpful for object prediction and predicate-object joint prediction. 
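The gap statistics above can be produced along these lines. The two anchor dates are the real ones mentioned above (latest demonstration 2022-06-07, earliest test event 2022-07-15); the remaining test dates are made up purely for illustration:

```python
import numpy as np
from datetime import date

# Time differences (in days) between each test event and the latest
# few-shot demonstration; only the first test date is from the data,
# the other two are hypothetical placeholders.
latest_demo = date(2022, 6, 7)
test_dates = [date(2022, 7, 15), date(2022, 7, 20), date(2022, 8, 1)]
gaps = np.array([(d - latest_demo).days for d in test_dates], dtype=float)

# Percentiles of the gap distribution, as in the table above.
table = np.percentile(gaps, [0, 1, 5, 25, 50, 75, 95, 99])
```

The 0% entry (the minimum gap) equals the 38 days between 2022-06-07 and 2022-07-15, matching the first cell of the table.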
Therefore, we have been evaluating the new baselines on these kinds of prediction tasks, and we will post an update as soon as possible. [MRR Results] > MRR more meaningful than MR. Lessons learned. Thanks! We evaluated all methods with MRR as well, and the ranking of methods remains the same under this metric; please see the table below for predicate prediction results on GDELT. | M | 2 | 3 | 4 | 5 | |----------------------------------------|----------------|---------------|---------------|-----------------| | ANHP | 0.7316 | 0.6234 | 0.5641 | 0.5397 | | ANHP-rnd-10-evt | 0.7185 | 0.6013 | 0.5250 | 0.4675 | | ANHP-rec-10-evt | 0.6078 | 0.4301 | 0.3534 | 0.3106 | | ANHP-rec-sub-10-evt | 0.6105 | 0.4340 | 0.3601 | 0.3121 | | ANHP-bert-10-evt | 0.7263 | 0.6264 | 0.5629 | 0.5278 | | ANHP-text-emb | 0.7330 | 0.6233 | 0.5650 | 0.5401 | | ANHP-llama | 0.7631 | 0.6673 | 0.6004 | 0.5533 | | ANHP-G3.5(ours) | 0.7775 | 0.6868 | 0.6217 | 0.5775 | The takeaways remain the same: our framework is better than all baselines; GPT-3.5 is better than LLAMA2-chat-13B, which is better than others; etc. --- Reply to Comment 1.1.2: Title: New Discussion - More About Logical Reasoning Comment: [More About Logical Reasoning] > quite helpful and insightful. I strongly suggest adding it to the next draft. Thanks! We are glad that you like it! We will surely add it to the paper; as we said, we have already written the full paragraph. We plan to add it as a new section: 3.4 Relations with Formal Logical Reasoning, but may also move it to Related Work (since it will cite many papers, including what you've suggested). We post our current draft here; please let us know if it needs any improvements. <-- paragraph about our method vs. logical reasoning %%% to keep rebuttal concise, we omit references here Our proposed method is deeply connected with the research on formal logical reasoning. 
In formal logical reasoning, one will be given a theory---i.e., a set of facts and rules written in a formal language (e.g., Prolog)---and asked to determine the truth value of a goal---i.e., a logical statement which may be a fact. A typical method for this problem is backward chaining: it starts from the goal and works backward to determine the sequence of steps needed to prove the goal, by applying the rules and identifying relevant facts as preconditions. This method has been generalized to solving other AI problems, including reasoning and planning in natural language. In our problem of future event prediction, each proposal given by the base model is a goal, and the full history can be regarded as a theory that only contains facts, i.e., the events that have actually happened. Like backward chaining, our method aims to find out the preconditions (i.e., cause events) for each goal. The key innovation is: our method utilizes the abductive reasoning capability of LLMs---precisely, the ability to reason about explanations for an outcome---to propose possible causes, which it then matches against the history to retrieve the actual events. As a result, our method doesn't need any explicit rules, but relies on the built-in knowledge of LLMs. Notably, unlike backward chaining, our method does not perform more than one step of reasoning. This is because we assume that the data is complete: i.e., all events that have happened are observed. In this case, a second step of reasoning is unnecessary since the direct causes are all observable, thus retrievable, and they are more recent in time than any indirect causes. When the data is incomplete, it will be necessary to perform multiple steps of reasoning, since the direct causes may not be observable and one has to find out indirect causes from the history. Handling incomplete data will be a non-trivial extension of our proposed method and we leave it to future work. 
--> --- Reply to Comment 1.1.3: Title: New Results - 10-Most-Recent-Same-X Comment: We finished new experiments and here come the new results! First, I need to correct my statement of "'same subject' inductive bias may be helpful for object prediction and predicate-object joint prediction". That is wrong: for object and predicate-object joint prediction, "same subject" will also retrieve the same set of evidence for all proposals, making it hard to learn for the reranking model. Sorry for this mistake. The "same X" inductive bias should help more when the retrievals are different across proposals. Such settings include: - predicate prediction: retrieve 10 most recent events with the same predicate; - object prediction: retrieve 10 most recent events with the same object. Now let's look at the new results in these settings. First, we recap the table for predicate prediction on GDELT data, adding a new baseline called ANHP-rec-pre-10-evt (i.e., retrieving 10 most recent events with the same predicate). As we can see, it outperforms "random retrieval" and other "most recent" designs for most values of M; but it is not as good as stronger baselines such as ANHP-bert-10-evt. | M | 2 | 3 | 4 | 5 | |---------------------|---------|--------|--------|--------| | ANHP | 0.7316 | 0.6234 | 0.5641 | 0.5397 | | ANHP-rnd-10-evt | 0.7185 | 0.6013 | 0.5250 | 0.4675 | | ANHP-rec-10-evt | 0.6078 | 0.4301 | 0.3534 | 0.3106 | | ANHP-rec-sub-10-evt | 0.6105 | 0.4340 | 0.3601 | 0.3121 | | ANHP-rec-pre-10-evt | 0.6716 | 0.6202 | 0.5340 | 0.5108 | | ANHP-bert-10-evt | 0.7263 | 0.6264 | 0.5629 | 0.5278 | | ANHP-text-emb | 0.7330 | 0.6233 | 0.5650 | 0.5401 | | ANHP-llama | 0.7631 | 0.6673 | 0.6004 | 0.5533 | | ANHP-G3.5(ours) | 0.7775 | 0.6868 | 0.6217 | 0.5775 | Then, we ran new experiments for object prediction on GDELT. For this setting, we tried ANHP-rec-obj-10-evt (i.e., retrieving 10 most recent events with the same object). 
As shown in the table below, ANHP-rec-obj-10-evt is better than "random retrieval" and other "most recent" designs, although it is significantly worse than our ANHP-G3.5. Similar to predicate prediction, the "same subject" design is worse than "random retrieval", although the gap is not as large as for predicate prediction (this task is harder, so performances are all lower, making the gap smaller). | M | 2 | 5 | 10 | 15 | 20 | |---------------------|--------|--------|--------|--------|--------| | ANHP | 0.5078 | 0.2214 | 0.1321 | 0.1047 | 0.0917 | | ANHP-rnd-10-evt | 0.5044 | 0.2096 | 0.1208 | 0.0854 | 0.0665 | | ANHP-rec-10-evt | 0.5001 | 0.2009 | 0.1001 | 0.0666 | 0.0501 | | ANHP-rec-sub-10-evt | 0.5002 | 0.2005 | 0.1013 | 0.0685 | 0.0522 | | ANHP-rec-obj-10-evt | 0.5117 | 0.2318 | 0.1502 | 0.1307 | 0.1260 | | ANHP-bert-10-evt | 0.5016 | 0.2385 | 0.1768 | 0.1651 | 0.1628 | | ANHP-text-emb | 0.5101 | 0.2290 | 0.1336 | 0.1041 | 0.0920 | | ANHP-llama | 0.5503 | 0.3287 | 0.2676 | 0.2644 | 0.2612 | | ANHP-G3.5 (ours) | 0.5568 | 0.3321 | 0.2789 | 0.2684 | 0.2664 | We hope that our new results have fully resolved your concerns. If you have any further questions, please let us know and we will do our best to address them.
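For completeness, the MRR metric reported in the tables above is computed as follows (a minimal sketch):

```python
def mean_reciprocal_rank(ranks):
    """ranks: 1-based rank of the gold answer for each test instance.
    MRR averages the reciprocal ranks, so one bad prediction cannot
    outweigh many good ones (unlike Mean Rank)."""
    return sum(1.0 / r for r in ranks) / len(ranks)
```

For example, gold ranks of 1, 2, and 4 on three instances give an MRR of (1 + 1/2 + 1/4) / 3 ≈ 0.583.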
Summary: This submission proposes a large language model (LLM-) based approach to enhancing event prediction methods. Instructed by a few annotated demonstrations, a large language model is used to suggest possible causes for a proposal to be predicted. Then a search module is used to find previous events that match the suggested causes. Finally, a scoring function is exploited to evaluate whether the retrieved events could actually cause the proposal. The above three steps can be used to enhance an existing event prediction method. Experimental results on two benchmark datasets (Amazon Review and GDELT) demonstrate that the performance improvement is significant by comparing enhanced methods with baseline ones. Strengths: Summary: (1) A new idea for using LLMs to enhance event prediction is proposed. (2) There are some empirical results for justifying the proposed idea. Weaknesses: Summary: (1) The effectiveness of the proposed approach, when applied to other datasets or different LLMs that are trained on other corpora, is in doubt. (2) The detail of how the proposed approach is used to enhance a baseline method for event prediction is unclear. (3) The description of the proposed methodology lacks rigor, with some important notations undefined. (4) The proposed approach has some potential limitations not mentioned in the submission (see below in the Limitation part). Soundness: The prediction performance of the proposed approach depends on the quality of the causes suggested by the LLM. If the suggested causes differ a lot from any events in the experimental dataset, the proposed approach may not help. Thus, it is doubtful whether the prediction performance improvement gained by the proposed approach still exists for other datasets or different LLMs that are trained on other corpora. In other words, the proposed approach seems not always sound in terms of improving performance for event prediction. 
Presentation: Although the basic idea is presented clearly in the submission, the details are confusing in some important aspects. First of all, it is unclear how to enhance an existing event prediction method by the three steps (see comments in the Summary part). Should the existing event prediction method be treated as the scoring function in the last step? In other words, does an enhanced method only differ from the baseline one in retrieving relevant events to conduct a prediction? Secondly, the template for constructing prompts to an LLM is unclear. There are two templates (Listing 1 and Listing 2 on page 4) presented in the submission; which one is used to construct a prompt? Finally, some descriptions of the proposed approach are unclear. In Lines 89-90, what does the superscript (1), …, (M) mean? How is M defined? In Line 113, what does the superscript (m) mean? In Line 117, what does the superscript (m,M') mean? How is M' defined? In Line 170, what does the superscript (m) mean? How is M in Equation (5) defined? Contribution: The submission presents a new idea for utilizing LLMs to enhance event prediction. However, the soundness of this idea is questionable (see comments in the Soundness part). It calls for further theoretical analysis and empirical study to confirm the soundness. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: The questions have been raised in the weakness comments for Presentation. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The authors have not adequately addressed all limitations in the proposed approach. Some of them may not be addressed by the state-of-the-art techniques, but the authors need to explicitly mention them in the submission. 
Firstly, the problem setting studied in the work restricts all event types under consideration to a predefined finite set. If the template for constructing a prompt to LLMs is something like Listing 1 on Page 4, this predefined set of event types should be rather small; otherwise the constructed prompt could be too long to feed into the LLM. Secondly, the names of event types should be meaningful, since they need to be fed into an LLM for generating their embeddings. Finally, the effectiveness of the proposed approach depends on whether the utilized LLM has knowledge about the event types under consideration. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your constructive feedback. We have added new results for you; please see [New Baselines] and [New Dataset]. Now we'll answer your remaining questions. > the soundness of this idea is questionable (see comments in the Soundness part). It calls for further theoretical analysis and empirical study to confirm the soundness. > effectiveness of the proposed approach, when being applied to other datasets or different LLMs that are trained by other corpora, is in doubt. > If the suggested causes differ a lot from any events in the experimental dataset, the proposed approach may not help… the proposed approach seems not always sound in terms of improving performance for event prediction. We reply to the "soundness" concern in [Scientific Rigor] of our to-all message. As for "further empirical study", we have managed to add new results for you and hope that you like them! Please see [New Baselines] and [New Dataset]. First, we experimented with open-source LLAMA2-chat-13B, which was very different from OpenAI's GPTs. Second, we evaluated a new ICEWS dataset, on which we tried both time and predicate prediction. At least for the new model and new data, our proposed framework still exhibits superior performance compared to baseline methods. Moreover, the LLMs used in our work are all trained with general online data, which has a broad coverage and does not specialize in the particular kinds of data/tasks that this paper is about. Therefore, it seems reasonable to expect that the success can generalize to other test datasets. > The proposed approach has some potential limitations not mentioned in the submission > the problem setting restricts all considering event types to be in a predefined finite set > the names of event types should be meaningful, since they need to be fed into an LLM > effectiveness of the proposed approach depends on whether the utilized LLM involves knowledge about the considering event types Thanks. 
We will include these limitations in the final version. However, we'd like to point out that the "finite" limitation comes from the base model part of our framework, not from our new LLM and ranking mechanism: nearly all event sequence models assume a finite set of event types, so our framework still has broad applicability. Generalization to infinite event types could be a very interesting future direction. > First of all, it is unclear how to enhance an existing event prediction method by the three steps (see comments in the Summary part). Should the existing event prediction method be treated as the scoring function in the last step? In other words, does an enhanced method only differ from the baseline one in retrieving relevant events to conduct a prediction? The 3-step framework is shown in Figure-1 of the paper, and it is qualitatively described in lines 33-41 of the Introduction. In the final version, we will add a high-level technical introduction at the beginning of Section 3 to describe how the 3 steps work together, and clarify where the event sequence model is used: - 1st step: an event sequence model is used as a proposer to propose predictions of the future (time, type, or attribute of type). This is where the existing event prediction model is used. This is also why we call an event sequence model the "base model": if we do not do steps 2 and 3, this framework falls back to an ordinary event sequence model. - 2nd step: the LLM reads each proposed prediction, and generates possible causes as if it had actually happened. Then an sBERT-based retriever finds events in the history that are similar/relevant to the generated causes. See Sections 3.1 and 3.2 in the paper. - 3rd step: a reranking module examines the combos of (proposal, retrieved evidence) and reranks them. See Section 3.3 in the paper. This module has to score sequences of time-stamped events, but it doesn't need to propose events. 
Therefore, its architecture is like an event sequence model on the input side (which may have confused you? we can clarify it in the final version!), but different on the output side. Technically, it is an energy function just like the ranking modules in non-autoregressive sequence models (e.g., HYPRO by Xue et al. 2022). > the template… is unclear… two templates (Listing 1 and Listing 2 on page 4)... which one is used to construct a prompt? Listing 1 is the general template that includes instructions and demonstrations. Listing 2 is an example of a demonstration. Listing 2 is used in Listing 1. (In Listing 1, we wrote "// Examples are in Listing 2.") > description of the proposed methodology lacks rigor, with some important notations undefined. > In Lines 89-90, what does the superscript (1), …, (M) mean? … In Line 113, what does the superscript (m) mean? In Line 117, what does the superscript (m,M') mean? How is M' defined? In Line 170, what does the superscript (m) mean? How is M in Equation (5) defined? We apologize for the confusion. We'll clarify the notation details in the final version. Throughout the paper, we use lowercase letters (e.g., m) as indices and capital letters (e.g., M) as the upper bounds of the indices. Specifically, M is the # of proposals and the variable m indexes the m-th proposal. The double index (m,m') shows up because each of the M time proposals corresponds to M' different type proposals at that time: i.e., (m,m') indexes the m'-th type proposal at the m-th time proposal. Lines 50-64 in Section 2 explain the relation between time and type prediction in the setting of classical event sequence modeling. --- Rebuttal Comment 1.1: Title: Issue on limited number of involved event types Comment: Thanks for the detailed response, which addressed most of my main concerns. There remains a question about the number of event types that can be used. 
Since the names and descriptions of all involved event types should be added to the prompt fed into the LLM, the number of event types is restricted not only by the original dataset but also by the limited size of a prompt to the LLM. For example, a model that does not use an LLM can deal with 1000 event types, but when it is integrated with an LLM, it can only construct a prompt with no more than 100 event types. Thus, how will the proposed approach enhance a baseline model which originally works well for thousands of event types, but whose event type names and descriptions cannot all be fed into the LLM? --- Reply to Comment 1.1.1: Title: We handled a hundred million event types in our experiments. Comment: Thank you for your response! We thought your concern was the "finite" set of event types. If the concern is a "small" set of event types, we can reassure you that our approach can indeed scale up to a "large" set of event types. In our GDELT experiments, it already handled **a hundred million event types** (2279 subjects x 20 predicates x 2279 objects; see Line-243, Line-244, and Appendix C.1 of the paper). Maybe what has confused you is our example in Listing-1? In Listing-1, our prompt includes "... predicates are restricted to the 20 options below: 1. MAKE STATEMENT…" First, we need to clarify terminology: - event type: In the field of event sequence modeling, an event type is typically defined as a combination of (subject, predicate, object) such as (biden, make statement, russia). A dataset such as GDELT or ICEWS typically has millions of possible event types since it has tens of predicates and thousands of subjects and objects. - predicate: In both NLP and event sequence modeling, a predicate means information about a subject or an action that the subject takes. 
In the event-centric NLP community, people sometimes use "predicate" and "event type" interchangeably, because they tend to think a "predicate" is a "type of event" that can happen to a subject. In this paper and rebuttal, we follow the convention of event sequence modeling, treating "predicate" only as an attribute of "event type" and not using the two interchangeably. From our understanding, you seem to treat "predicate" and "event type" as equivalent, so we are eager to clarify this terminology. Second, our prompt never needs to mention the set of possible subjects or objects, but our approach works remarkably well on object prediction and predicate-object joint prediction; please see Fig-2(a), Fig-3(b), Fig-4(right), Fig-5(right), Fig-6(right), Fig-7(right), Fig-8(right), Fig-9(right) in the main paper along with other figures in the appendices. The 2279 subjects and 2279 objects (they are the same set) include a large diversity of political entities (orgs, persons, etc) across G20 countries, so it is really challenging to predict them. Third, we ran new experiments on GDELT, and found that our method still works very well if we delete this list from the prompt (i.e., no set of predicates is mentioned at all). We show the new predicate prediction results in the table below, where ANHP-G3.5-noprelist is our framework without the content of "predicates are restricted to the 20 options…" As you can see, the changes in results are negligible. | M | 2 | 3 | 4 | 5 | |-|-|-|-|-| | ANHP-G3.5-noprelist | 0.7771 | 0.6878 | 0.6187 | 0.5744 | | ANHP-G3.5 (ours) | 0.7775 | 0.6868 | 0.6217 | 0.5775 | Fourth, we need to highlight: as stated in our response to E9bj, we didn't tune prompts. We designed the prompt templates before running any experiments, and have not changed them since. We decided to keep the list of "20 options" in the prompt because it is overall short. 
We will discuss the new results (without this list) in the final version, but we prefer keeping it as the default design of our framework, considering that the full list is really not long. There are other reasons why it is not an issue to include the list of predicates in the prompts: - the set of predicates is usually small. GDELT is one of the largest event databases in the world, and their creators somehow found it sufficient to use only 20 predicates (more precisely, 20 CAMEO codes; ICEWS uses the same CAMEO codes). In event-centric NLP, people work with the "event trigger/type set" (i.e., "predicate set" in our terminology) of ACE 2005 Corpus, which defines 33 "event types" (i.e., "predicates" in our terminology). As you have noted, our method can easily handle <100 predicates in its prompts, which is more than enough for real-world large-scale applications. - LLMs keep evolving and their context windows will surely become larger. We expect that LLMs will be able to handle >100 predicates in the near future. We hope that this long message has resolved your concerns regarding "small set of event types". If you have any further concerns or questions, please let us know and we will be more than willing to discuss them.
Summary: The paper investigates the use of large language models (LLMs) to improve the accuracy of event sequence models. Specifically, the authors propose an abductive reasoning framework based on an LLM. First, the event model produces some proposals of predictions. Then, the LLM suggests some causes for each proposal based on annotated demonstrations. At this stage, a search module retrieves some events that match the causes and finally a scoring function gives the likelihood that the retrieved events could actually cause the proposal. Strengths: * The proposed method is sound and applicable in a wide range of practical settings * The experimental evaluation is solid, with several analyses and ablations that help the reader understand the contribution of different design choices * The paper is well written and easy to follow Weaknesses: * The results are not surprising, as the baselines do not rely on the same setting * The paper should mention the limitation that it requires textual data for the LLM Technical Quality: 3 good Clarity: 3 good Questions for Authors: None Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Limitations are discussed in the Appendix. The paper should mention the limitation that it requires textual data for the LLM. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Title: Response to Reviewer HaQW Thank you for your constructive feedback. We have added new results for you; please see [New Baselines] and [New Dataset]. Now we'll answer your remaining questions. > The results are not surprising, as the baselines do not rely on the same setting Do you mean that the baselines are only event sequence models, without any cause-retrieving and reranking mechanism? If yes, you may like our new results presented in [New Baselines]. Some new baseline methods we implemented follow the general propose-retrieve-rerank pipeline of our proposed framework, and they differ in how they retrieve causes. Can they be considered to "rely on the same setting"? As you can see, our full framework outperforms all the new baseline methods. In addition, we'd like to point out that our original baseline event sequence models are strong and difficult to outperform. Moreover, from an NLP perspective, whether LLMs can help predict the future is an open question. Therefore, we honestly didn't know the improvements for sure before we actually carried out the experiments. If you still think that they are "not surprising" after reading this rebuttal, we would really appreciate it if you could elaborate, so we can start a more in-depth discussion. Also, if we have misunderstood what you mean by "same setting", please advise so that we can give a more informed response to this concern and further improve the paper. > should mention the limitation that it requires textual data for the LLM. Sure. We will include it in the final version. --- Rebuttal 2: Comment: Thank you again for your review. Has our response addressed your concerns?
Rebuttal 1: Rebuttal: We thank the reviewers for their constructive feedback! In this to-all message, we clarify our technical contributions and present new results. We will address other concerns in responses to individual reviews. Due to the 6000-char limit, we have to keep this response concise. If anyone asks for more details, we'd be happy to elaborate in the Discussion phase. [Contribution, Significance, and Impact] Event sequence modeling (i.e., learning to predict future events given the past) is an important area in machine learning. In recent years, over a thousand new papers on this topic have been posted on arXiv. Our paper makes a significant contribution to this area: we propose a simple modeling and inference framework, which leverages recent advances in NLP---LLMs and in-context learning---to improve the prediction accuracies of traditional event sequence models. Our framework is compatible with any event sequence model. Our experiments show that it significantly and consistently improves over a range of strong and influential event sequence models. This finding is certainly of great interest to both the event sequence modeling and NLP (particularly, LLM) research communities, which is a broad audience. In addition, we will release code to assist researchers in reproducing our results and conducting their future research. R.mHgh thinks that "a more worth-studying problem is how to use LLMs to more accurately predict future events in the first place". We agree it is "worth-studying", but it is orthogonal to our contribution. Moreover, "which is more worth-studying" is subjective: in the history of science, many problems were regarded as "not important", but some of them turned out to be extremely important later. [Scientific Rigor] R.Hwvw is concerned with the scientific rigor and soundness of this paper, and asks for more "theoretical analysis and empirical study". Please see [New Baselines] and [New Dataset] for new results and analysis. 
As for "theoretical analysis", we'd like to emphasize: some papers are theoretically rigorous; some are empirically rigorous. This paper is empirically rigorous: we carefully thought through the framework design, evaluated against strong and proper baselines, and conducted extensive empirical studies. Our method has no theoretical guarantees, but this limitation is shared by many great empirical papers and should not be a reason for rejection. Surely, there could be settings where our framework is no better than the baselines: e.g., when LLMs provide irrelevant, incorrect, or misleading information. We'll discuss such cases in the Limitations section. [About Experiment Design] R.mHgh is concerned with our experiment design. For a fair comparison with previous work, this paper follows the standard experiment setup: - evaluate on predicting the time and type of future events, given the history; - only predict an attribute of the type (e.g., predicate) when there are too many types. GDELT has over 20 million event types (in the format of subject-predicate-object). First, full type prediction is extremely hard and all methods will end up with indistinguishably low accuracies. Second, in real applications, users usually care about "what will happen between entity A and entity B" (predicate prediction) and "what will A do to B" (predicate-object joint prediction), instead of "which of 20M possible events is most probable" (full type prediction). Therefore, previous work using GDELT also evaluates on attribute prediction. [New Baselines] We ran experiments with 7 new baselines. They include: 3 methods that follow the pipeline of our framework but differ in how they retrieve evidence; a new version of ours that uses LLAMA2-chat-13B; a new event sequence model that uses text information; a method that directly uses LLMs for event prediction; and a new version of ours that retrieves a past event only if its retrieval score is higher than a threshold. Results are in Fig-16 and 17 of the submitted pdf. 
The takeaways are: - our GPT-3.5-based framework outperforms all the new baselines. - the LLAMA2-chat-13B version of our framework is better than the GPT-3 (175B) version, indicating that RLHF may be important for our problem setting. This finding is interesting and worth studying in the future. - directly using LLMs for future prediction performs worse than the base ANHP model. - there is not much difference between "retrieve the 10 highest-scored events" and "retrieve if it passes a threshold". [New Dataset] We ran experiments on ICEWS, a dataset similar to GDELT but less dense in time (so that time prediction is more meaningful). Results are in Fig-18 of the submitted pdf: as on GDELT and Amazon Review, our LLM-enhanced framework significantly outperforms the baselines on ICEWS. [Qualitative Results] The LLM-generated causes look reasonable and meaningful. We'll show them in the final version. [Limitations] Our Limitations section is in the appendices. In the final version, we'll discuss the additional limitations pointed out by reviewers. We have already written a new version, but couldn't share it here due to the character limit. If needed, we can show it in the Discussion phase. However, we kindly request reviewers to be mindful of our existing contributions and significance. After all, what hasn't been done should not undermine the value and significance of what has already been done. [Presentation Improvements] We will include the new results in the final version. In addition, we have planned a list of presentation improvements, including ways to fix issues pointed out by reviewers (e.g., notation definitions, formula references). With the extra page allowed by the NeurIPS camera-ready, the fixes are easy to execute. Considering that we have managed to deliver many new results within a week, we hope that reviewers feel assured of our promise and commitment to complete the proposed improvements. If needed, we'd like to show our full list of presentation improvements in the Discussion phase, for reviewers to check and approve.
Pdf: /pdf/3c71d34bfd4bfc2d90afe5fc7ada082989632fe7.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: Summary: This paper investigates the effectiveness of large language models in reasoning and predicting event sequences. The authors propose a general framework that combines event sequence models with large language models for the task of event prediction. In this framework, an event sequence model proposes predictions on future events given the past, while a large language model assists by generating possible causes for each proposal. Experiments on two challenging real-world datasets (Amazon Review and GDELT) have shown that this framework significantly outperforms state-of-the-art event sequence models. Contributions: 1. Proposes a domain-agnostic framework that combines event sequence models with large language models for the task of event prediction 2. Demonstrates the framework's ability to significantly outperform state-of-the-art event sequence models on two challenging real-world datasets (Amazon Review and GDELT). 3. Conducts extensive ablation studies to test the framework's robustness to prompt design and hyperparameters. Strengths: 1. The idea of using abductive reasoning to improve event prediction is really a natural fit, interesting, and novel. The authors have made such a nice connection between two NLP problems. The framework is also pretty general. 2. Performance: The experimental results show that the proposed framework outperforms state-of-the-art event sequence models on two datasets, and the amount of improvement seems large from the chart. 3. The paper is well written and easy to understand. Weaknesses: * The literature review is a bit limited. For example, the paper claims "We are the first—to the best of our knowledge—to integrate large language models into temporal modeling." But this is not true, as there are already works that apply LLMs to temporal reasoning. 
For example, please see the paper "Generic Temporal Reasoning with Differential Analysis and Explanation (https://arxiv.org/pdf/2212.10467.pdf)" and other papers from Dan Roth's group. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. The result presentation is a bit hard to read; can tables be presented instead of charts? It's really hard to read, especially in the regions where methods are close to each other. 2. In Figure 4, the shapes are so close to each other that it becomes difficult for the readers to read the figures. Either make it bigger or differentiate them in other ways. 3. It's really surprising that gpt3.5 is so much better than davinci-003, given the general belief that the reasoning abilities of these models rank as gpt4 > text-davinci-003 > gpt3.5. Do the authors have an interpretation? Was the prompt tuning done on gpt3.5 and later used for davinci? 4. Can the authors do some qualitative analysis on the generated event causes? What happens if the generated event causes for all event predictions are not included in past events? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors did not discuss limitations of this work. I mainly see two limitations: 1. The use of a closed-source model limits the applicability of this approach. 2. It requires manually writing prompts for specific tasks. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your constructive feedback and for being supportive! We have added new results to resolve your concerns; please see [New Baselines] and [New Dataset]. Now we'll answer your remaining questions. > The literature review is a bit limited… There are already works that apply LLMs to temporal reasoning… Thanks for the reference! We apologize for our misleading claim. What we meant is: we are the first to apply LLMs to the area of event sequence modeling (specifically, the community discussed in [Contribution, Significance, and Impact], which is what we meant by "temporal modeling"); or more precisely, we propose the first framework that leverages LLMs to enhance event sequence models. We are well aware of the line of "temporal reasoning" work in NLP. We apologize for not discussing it in the submitted version. In the final version, we will add a paragraph discussing the similarities and differences between our work and the "temporal reasoning" NLP work. This line of NLP work includes many papers (e.g., Feng et al. 2022 as you suggest) from NLP groups led by Dan Roth, Heng Ji, Kathleen McKeown, Muhao Chen, Qiang Ning, etc. > can tables be presented instead of charts? ... In figure 4, the shapes are really too close… Definitely! We'll keep the figures in the main paper, but add the actual numbers in the appendices. We will also make the curves and markers more distinguishable by using larger sizes and different shapes. > really surprising that gpt3.5 is so much better than davinci-003… Do the authors have an interpretation? Was the prompt tuning done on the gpt3.5 and later used for davinci? We found it surprising, too. Based on our new results ([New Baselines]), it seems that RLHF is important for this problem setting: when using LLAMA2-chat-13B, our framework performs competitively with the GPT-3.5 version, and significantly better than the GPT-3 version. It is unclear how RLHF has helped this particular problem, which seems to be a good future research topic.
Moreover, we didn't tune prompts: we designed the prompt templates before running any experiments, and haven't changed them since then. Over the past week, we tried a few different prompt templates for each dataset, and found that this doesn't change the ranking of the methods (i.e., GPT-3.5 > LLAMA2-chat-13B > GPT-3). We'll add this new analysis in the final version. > What happens if the generated event causes for all event predictions are not included in past events? For each generated cause, the most relevant past event (measured by sBERT embedding similarity) will be retrieved; see Line-133. It is the duty of the ranking module to analyze each (proposal, retrieved evidence) combination and assign low scores to implausible combinations. In [New Baselines], we show results of "only retrieve an event if its retrieval score is higher than a threshold", but its performance is not very different from "always retrieve the 10 highest-scored events, no matter how low their scores are". > did not discuss limitations… the use of closed-source model limits the applicability… requires manually writing prompts The Limitations section is in the appendices. We will add your points in the final version. In response to your point about "closed-source LLMs", we have added new results using the open-source LLAMA2-chat-13B; please see [New Baselines]. > Can the authors do some qualitative analysis We'll add them in the final version. See [Qualitative Results]. --- Rebuttal Comment 1.1: Title: Thank you for the response Comment: The author response has addressed my confusion, and I would like to see this paper accepted. Event-based reasoning has remained a great challenge in NLP, and this paper provides a really simple and effective technique to improve the performance. --- Reply to Comment 1.1.1: Title: Thank you for your support! Comment: Thank you very much for your support! And we really appreciate your acknowledgement that event-based reasoning is a great challenge in NLP.
We have been improving the paper presentation based on all the reviews, and we will surely deliver a strong and high-quality camera-ready to NeurIPS. Later in the Discussion phase, we will post our full list of presentation improvements. (We will hold it until then in case any reviewer requests anything new.)
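As a concrete illustration of the retrieval step discussed in this thread (each LLM-generated cause is matched against past events by sBERT embedding similarity, keeping either the 10 highest-scored events or only those above a threshold), a minimal sketch follows. The function name and the plain-vector embeddings are our own illustrative assumptions; in the actual framework the embeddings would come from a sentence encoder such as sBERT.

```python
import numpy as np

def top_k_events(cause_emb, event_embs, k=10, threshold=None):
    """Rank past events by cosine similarity to a generated cause.

    cause_emb  : (d,) embedding of the LLM-generated cause
    event_embs : (n, d) embeddings of past events
    Returns indices of the k most similar events; if `threshold` is set,
    events scoring below it are dropped (the thresholded variant from
    [New Baselines]).
    """
    cause = cause_emb / np.linalg.norm(cause_emb)
    events = event_embs / np.linalg.norm(event_embs, axis=1, keepdims=True)
    scores = events @ cause                  # cosine similarities
    order = np.argsort(-scores)[:k]          # k highest-scored events
    if threshold is not None:
        order = order[scores[order] >= threshold]
    return order.tolist()
```

Whichever variant is used, the ranking module downstream is still responsible for discarding implausible (proposal, retrieved evidence) combinations.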
Drift doesn't Matter: Dynamic Decomposition with Diffusion Reconstruction for Unstable Multivariate Time Series Anomaly Detection
Accept (poster)
Summary: The paper introduces D3R, a novel anomaly detection network for unstable multivariate time series data. D3R addresses the issue of drift by dynamically decomposing the data and reconstructing it using noise diffusion. Experimental results show that D3R outperforms existing methods, with a 12% average relative improvement. The proposed approach has implications for other tasks involving long-period multivariate time series analysis and can be applied to anomaly detection in different types of data. Overall, the paper presents an innovative solution to improve anomaly detection in unstable time series data. Strengths: 1. Novel Approach: The paper proposes a novel anomaly detection network, D3R, specifically designed for unstable multivariate time series data. Incorporating dynamic decomposition, noise diffusion, and end-to-end training sets it apart from existing methods. 2. Addressing Drift: The paper addresses the drift generated from non-stationary environments, often overlooked in previous works. D3R aims to reduce false alarms and improve anomaly detection accuracy by tackling drift through decomposition and reconstruction. 3. Experimental Results: The paper presents extensive experiments on real-world datasets, demonstrating that D3R outperforms existing methods with a notable 12% average relative improvement. This provides strong empirical evidence for the effectiveness of the proposed approach. Weaknesses: The time complexity is not discussed. As real-world time series datasets can be large-scale and complex, it is important to address scalability and computational complexity. Technical Quality: 3 good Clarity: 3 good Questions for Authors: NA Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The time complexity is not discussed. As real-world time series datasets can be large-scale and complex, it is important to address scalability and computational complexity. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your positive comments and insightful suggestions. Please find our response below. **Q1: Experiments on computational cost.** As three reviewers have posed a similar question, we provide a consistent response in **Q1** of the "global" response. Thanks. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed reply. It solves my concerns. I keep my score.
Summary: The authors propose a new model called D3R (Decomposition with Diffusion Reconstruction) for anomaly detection in multivariate time series data. The proposed model applies a dynamic decomposition method to separate the stable components and trends in time series data. This method can effectively separate long-period stable components as well. Additionally, it utilizes a noise diffusion model to control the information bottleneck from an external perspective. Through this approach, the proposed model allows for controlling the data restoration performance, which varies for each dataset, from the outside without iterative exploration processes. Strengths: - The authors clearly defined the problem they wanted to solve in time series anomaly detection and demonstrated instances of the phenomenon occurring in real data. The proposed method is experimentally well-supported in its ability to solve the given problem. - The proposed data-time mix-attention and the dynamic decomposition method are ingenious. Weaknesses: - The proposed method sets the ground truth for the trend as a moving average (line 106), which suggests it may still not be free from the local window issue (Challenge 1, line 33) in long-term component decomposition. - In the authors' comparative experiments (Table 2), all comparison models used default hyperparameters (Appendix C.2.), while the proposed model explored combinations that yielded the best performance for each dataset (Appendix C.3.). Although the proposed method appears to be robust to hyperparameter variations, such comparisons may not be fair. It would be more appropriate to apply optimal hyperparameters to the comparison models, or apply a single hyperparameter setting for all data in the proposed model, and then compare the results. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Using a box plot to show the results in Figure 4(b) is not appropriate.
Each hyperparameter decreases to one-fifth of the standard value and can increase up to twice the standard value, varying according to the values determined by the authors. It would be better to utilize Figure 2 in Appendix F instead. - In Table 1, what do the terms 'Series Number' and 'Attacks Number' refer to? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: - The authors mentioned in Limitations section that the proposed model has a large computational cost (Appendix line 114). It would be beneficial if they provide specific information on the amount of GPU-hours required for the experiments. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your positive comments and insightful suggestions. Please find our response below. **Q1: Details of dynamic decomposition.** As two reviewers have questions about why our method breaks the limitations of the local window, even though we also used the moving average to obtain the trend component, we provide a consistent response in **Q2** of the "global" response. Thanks. **Q2: Misunderstanding of hyperparameter setting.** Apologies for the confusion in Appendix C.3. Your concern pertains to the boundary of the added drift in the disturbance strategy (line 56 in the Appendix) and the max offset of offset subtraction (line 57 in the Appendix). The original text actually describes the specific way in which the hyperparameters were set, i.e., the grid search, and its search range. In fact, we do use a single hyperparameter setting for all datasets, as evident in our submitted code (lines 33, 34 in main.py). Specifically, the boundary of the added drift in the disturbance strategy is set at 10, and the max offset of offset subtraction is set to 30. **Q3: Reasons for using box plots in Figure 4(b).** Thanks for your suggestions. Figure 4(b) in the paper is designed to demonstrate the range of variation in the performance of the diffusion reconstruction module and the VAE module, using the same scaling range of hyperparameters. The purpose is to verify the robustness of our design. In comparison to the line plot (Figure 2 in the Appendix), which effectively demonstrates the trend of changes, the box plot is better suited for intuitively illustrating the distribution of the data. **Q4: Explanations of terms in Table 1.** "Series Number" refers to the number of variables ($k$ in line 85 in the paper) in the multivariate time series. "Attacks Number" refers to the number of anomaly segments present in the test set. In the real world, anomalies are usually continuous and appear in the form of segments.
For instance, in the PSM dataset, the test set consists of data $\mathbf{X} \in \mathbb{R}^{87481 \times 25}$, where 87481 represents the length of timestamps, and 25 represents the number of variables. Among the 87481 timestamps, there are 73 anomaly segments. The segments vary in duration, with the shortest anomaly lasting for only 1 timestamp, and the longest anomaly spanning 8861 timestamps. | | Testing Size | Series Number | Attacks Number | Anomaly Durations | | :--: | :----------: | :-----------: | :------------: | :---------------: | | PSM | 87481 | 25 | 73 | 1$\sim$8861 | **Q5: Experiments on computational cost.** As three reviewers have posed a similar question, we provide a consistent response in **Q1** of the "global" response. Thanks.
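The "Attacks Number" and anomaly-duration statistics discussed in Q4 above can be derived directly from a point-wise 0/1 label vector; a small sketch follows (the helper name is ours, not from the paper).

```python
def anomaly_segments(labels):
    """Return (start, length) for each contiguous run of 1s in a 0/1 label list.

    The number of runs corresponds to "Attacks Number"; the lengths give
    the anomaly durations.
    """
    segments = []
    start = None
    for i, y in enumerate(labels):
        if y == 1 and start is None:
            start = i                      # a new anomaly segment begins
        elif y == 0 and start is not None:
            segments.append((start, i - start))
            start = None
    if start is not None:                  # final segment runs to the end
        segments.append((start, len(labels) - start))
    return segments
```

Run over PSM's 87481 test labels, this would yield the 73 segments with durations from 1 to 8861 reported in the table.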
Summary: To overcome the temporal drift issues in unstable time series data, this work proposes an anomaly detection method, $D^3R$. By considering the temporal continuity of series and relieving the constraints of the information bottleneck, $D^3R$ realizes dynamic decomposition and noise-diffusion-based series reconstruction. Extensive experiments on real-world time series datasets demonstrate the superiority of $D^3R$ w.r.t. anomaly detection. Strengths: 1. This work is well motivated by tackling issues overlooked in existing time series anomaly detection works. 2. The proposed $D^3R$ approach employs decomposition and reconstruction for time series anomaly detection, which is technically sound. 3. This draft is well organized, and the presentation is clear. Weaknesses: 1. Time series anomaly detection is a well-studied problem. More datasets should be used in the experiments to validate the proposed method's effectiveness, such as the Yahoo dataset (Webscope dataset ydata-labeled-time-series-anomalies-v1 0, 2015). Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. As shown in the experiments, the performance improvement of $D^3R$ on PSM is less impressive than on other datasets. You highlight that $D^3R$'s performance improvement is more significant on time series data with high nonstationarity. Does this mean that the application of $D^3R$ might be limited to highly unstable time series data? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Potential applicability issues should be elaborated more.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your positive comments and insightful suggestions. Please find our response below. **Q1: Reason for the dataset selection.** Thanks for your suggestions. The datasets utilized in our research encompass both server (PSM, SMD) and water treatment (SWaT) scenarios. Additionally, stable (PSM) and unstable (SMD, SWaT) conditions are included. These datasets can already offer a comprehensive validation of our model's superiority. While the Yahoo dataset is acknowledged as a classic anomaly detection dataset, it consists solely of univariate time series, which may not exactly match our scenario. **Q2: Applicability of the model.** Based on the experimental results (Table 2 in the paper), our model demonstrates more substantial improvements when applied to unstable time series data. However, it is equally well-suited for stable data, such as PSM, where we achieve the best performance despite not observing a significant improvement. Technically, stable data undergoes the dynamic decomposition module, producing a predicted trend component $\hat{\mathbf{T}}_\text{d}$ that approximates a constant. This component does not adversely affect the subsequent diffusion reconstruction module, as it still allows the module to robustly control the information bottleneck from external sources and effectively leverage our model's strengths.
Summary: The paper presents a Transformer-based model called Dynamic Decomposition with Diffusion Reconstruction for Anomaly Detection in unstable multivariate time series. The authors addressed two challenges: the limitation of decomposition for long-period time series and the high training cost of adjusting the information bottleneck in the reconstruction procedure. They introduced a novel dynamic decomposition method and a noise diffusion method to enable effective utilization of external information, overcome the limitations of local sliding windows, and avoid the high training cost associated with adjusting the information bottleneck. The proposed model achieves state-of-the-art results on different real-world datasets and outperforms baselines significantly on unstable datasets. Strengths: 1. The authors present a clear exposition of their research motivation and proposed solution. 2. The authors' methods effectively tackle two crucial issues: the limitations of decomposition for long-period time series and the high training cost of adjusting the information bottleneck. 3. The authors conducted experiments on diverse datasets, and their proposed method demonstrated excellent performance. Weaknesses: 1. The authors used the mean of five runs as the final results in their experiments but did not provide the standard deviation of the results. 2. While the model achieved the best F-score performance due to higher recall values, its precision values were comparatively lower than those of other models. The authors should explain this observation. 3. Further explanations for Figure 1 are recommended to clarify why other methods failed in this specific scenario. Technical Quality: 3 good Clarity: 3 good Questions for Authors: see weaknesses Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your positive comments and insightful suggestions. Please find our response below. **Q1: Standard deviation of the experimental results.** Due to space limitations, we present only the mean of the results from the five runs in the paper, as it offers a more representative depiction of the method's performance. The detailed version of the primary experimental results (Table 2 in the paper) is available in the "global" response (Table 1 in the attached PDF). **Q2: Precision values of the model are comparatively lower than others.** The primary reason for this phenomenon is that our selection criterion for the grid search of the SPOT parameters is based on the F1 score (line 52 in the Appendix). In other words, we prioritize parameters that yield the highest F1 score. Precision and recall are significantly impacted by the threshold in unsupervised anomaly detection methods, where a high threshold results in relatively high precision, and a low threshold leads to relatively high recall. Therefore, we primarily focus on the comprehensive performance of the model, i.e., the F1 score. Additionally, to mitigate the influence of SPOT parameters and thresholds, we also employ AUC to evaluate the raw anomaly scores (Table 2 in the Appendix), and our model demonstrates superior performance in this evaluation as well. **Q3: Further explanation of Figure 1.** The core of an unsupervised anomaly detection method is to acquire knowledge of normal temporal patterns from the training data. Moments in the test data that significantly deviate from these established patterns are classified as anomalies. As the data is collected from a non-stationary real-world environment, the temporal patterns may drift over time. Flat drift is typically not an anomaly. Previous methods have often overlooked this aspect, resulting in misclassifying moments with drift as anomalies.
For these methods, moments with drift indeed deviate from the incomplete temporal patterns they have established.
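The threshold trade-off described in Q2 (a high threshold favors precision, a low one favors recall, and the grid search keeps whatever maximizes F1) can be illustrated with a small sketch. This is a generic point-wise F1 sweep of our own, not the SPOT procedure from the paper.

```python
def best_f1_threshold(scores, labels):
    """Sweep candidate thresholds over anomaly scores and return the
    (F1, threshold) pair maximizing point-wise F1.

    scores : per-timestamp anomaly scores
    labels : 0/1 ground-truth anomaly labels
    """
    best = (0.0, None)
    for t in sorted(set(scores)):
        pred = [1 if s >= t else 0 for s in scores]
        tp = sum(p and y for p, y in zip(pred, labels))
        fp = sum(p and not y for p, y in zip(pred, labels))
        fn = sum((not p) and y for p, y in zip(pred, labels))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        if f1 > best[0]:
            best = (f1, t)
    return best
```

Selecting by F1 in this way naturally trades some precision for recall (or vice versa) depending on the score distribution, which is consistent with the precision/recall pattern the reviewer observed.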
Rebuttal 1: Rebuttal: **Q1: Experiments on computational cost.** As real-world time series datasets can be large-scale and complex, we supplement measurements of training time, inference time, and model size for the deep learning-based models on the SMD dataset. The experimental results are presented in the subsequent table. Statistically, the training time for these models remains below 10 minutes, which is acceptable for real-world deployment and maintenance. The utilization of attention and its variants in the backbone network of our model leads to its larger size compared to previous algorithms. Thanks to the highly parallelized nature of the attention mechanism, both the training time and inference time of our model remain competitive. Additionally, it is worth mentioning that there has been substantial recent work [1] on transformer linearization, which may aid in reducing the burden of our model. We plan to explore this aspect in our future research. | Method | Training Time (s) | Inference Time (s) | Model Size (MB) | | :-----------------: | :---------------: | :----------------: | :-------------: | | VAE | 157.91 | 30.90 | 0.02 | | DeepSVDD | **60.70** | **12.61** | **0.01** | | LSTM-AE | 283.61 | 72.82 | 0.04 | | MTAD-GAT | 188.52 | 60.23 | 1.20 | | Anomaly Transformer | 422.43 | 94.36 | 28.15 | | TFAD | 315.39 | 38.54 | 1.04 | | Ours | 399.32 | 104.12 | 109.35 | **Q2: Details of dynamic decomposition.** The moving average method is utilized **solely in the data preprocessing stage during training** to extract the trend and stable components. At this stage, the training data has not undergone slicing based on sliding windows, thus avoiding any limitations of the local sliding window. The specific decomposition algorithm employed at this stage is not a concern, as our primary objective is to generate a labeled stable component for model training.
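The preprocessing described above (a moving average over the full, unsliced training series yielding labeled trend and stable components) might be sketched as follows; the window length and function name are illustrative assumptions of ours, not the paper's implementation.

```python
import numpy as np

def label_decomposition(series, window=25):
    """Split a (length, k) training series into trend + stable components.

    The trend is a centered moving average computed over the whole series
    (before any sliding-window slicing); the labeled stable component is
    the residual, series - trend.
    """
    kernel = np.ones(window) / window
    pad = window // 2
    trend = np.empty_like(series, dtype=float)
    for j in range(series.shape[1]):
        # edge-pad so the moving average keeps the original length
        padded = np.pad(series[:, j], pad, mode="edge")
        trend[:, j] = np.convolve(padded, kernel, mode="valid")[: series.shape[0]]
    stable = series - trend
    return trend, stable
```

During training, `stable` would serve as the label the dynamic decomposition module learns to predict from timestamps; at test time the learned mapping replaces this windowed computation entirely.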
We can also use other, more advanced decomposition algorithms, such as STL [2] or the HP filter [3], to obtain more precise labels. With a labeled stable component, our model can start training. Essentially, the core of the dynamic decomposition module is **learning a mapping function from timestamps to the stable component**. During training, the model learns this mapping function iteratively. During testing, it can **directly map the current input series to its stable component with this function**. It is worth noting that we no longer require the labeled stable component during testing. With the help of mapping rather than moving averages within localized windows, our model allows for precise decomposition of time series within a smaller sliding window (or even a single point). **References:** [1] Haixu Wu, et al. “Flowformer: Linearizing transformers with conservation flows.” In *ICML*, 2022. [2] Cleveland Robert, et al. “STL: A seasonal-trend decomposition procedure based on loess.” In *Journal of Official Statistics*, 1990. [3] Robert J. Hodrick, et al. “Postwar US business cycles: an empirical investigation.” In *Journal of Money, Credit, and Banking*, 1997. Pdf: /pdf/c40129b7fe7f9573ff526e33e96a4115417fb103.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper tackles the problem that existing works omit the drift generated from non-stationary environments by focusing on stable data, which may lead to numerous false alarms. As a solution, they propose an end-to-end anomaly detection network for real-world unstable data, named Dynamic Decomposition with Diffusion Reconstruction (D3R). In the decomposition stage, they dynamically decompose long-period multivariate time series by utilizing data-time mix-attention to overcome the limitation of the local sliding window. In the reconstruction stage, they control the information bottleneck externally by noise diffusion and directly reconstruct the polluted data. They evaluate on three real-world datasets (PSM, SMD, SWaT), and achieve the best performance compared to existing unsupervised anomaly detection methods. Strengths: Well-motivated and sound: The paper clearly identifies the problem that existing methods overlook drift in unstable data, which distorts the anomaly score. Their proposed dynamic decomposition for long-period multivariate time series and diffusion reconstruction for controlling the information bottleneck are sound. Ablation study and Analysis: In sections 4.3 and 4.4, they analyze their proposed dynamic decomposition module and diffusion reconstruction module. This helps to understand the effect of the proposed modules. Well-structured paper: Their paper is well-organized and easy to read. Weaknesses: Computational cost: there is no comparison of computational cost (inference time, # of params, GFLOPs ...) with existing models. To check the efficiency, they should report a comparison of computational cost. Minor comments: Repeated citations [11] and [12] in the references Technical Quality: 3 good Clarity: 3 good Questions for Authors: . Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: .
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your positive comments and insightful suggestions. Please find our response below. **Q1: Experiments on computational cost.** As three reviewers have posed a similar question, we provide a consistent response in **Q1** of the "global" response. Thanks. **Q2: Repetitive citation in references.** Thanks for pointing out our error; we will correct it in the revised paper. --- Rebuttal Comment 1.1: Comment: In my opinion, it is important for this proposed Dynamic Decomposition with Diffusion Reconstruction method to further optimize the inference time and model size for practicality. It would be nice if the authors explicitly showed an alternative or concrete way to do this in their rebuttal. Nevertheless, after carefully reading the other reviewers' reviews and the rebuttal, I maintain my original score. --- Reply to Comment 1.1.1: Comment: Thank you again for your valuable suggestion. In response to your concerns, we would like to offer two points of explanation. Firstly, we consider the inference time and model size of our proposed model entirely practical for real-world application. The inference process for **16 days** of data (Table 1 in the paper) necessitates a mere **104.12 seconds** (**Q1** in the "global" response), thereby satisfying the criteria for online, real-time detection. In comparison to the substantial enhancement (12%) in detection accuracy that we have achieved, the model's size of 109.35MB remains affordable given the expanding hardware resources of the present era. Secondly, it is challenging to implement model lightweighting in a concrete way during the time-critical rebuttal period. It is imperative to ensure that the model's accuracy does not decline significantly after lightweighting. We believe this could evolve into new, long-term work.
Summary: Current unsupervised methods for multivariate time series anomaly detection often overlook drift from non-stationary environments, leading to false alarms. To address this, this paper presents Dynamic Decomposition with Diffusion Reconstruction (D3R), a new anomaly detection network for unstable data. D3R decomposes and reconstructs drift, using data-time mix-attention for dynamic decomposition and noise diffusion to manage the information bottleneck in reconstruction. This end-to-end trainable model outperforms existing methods, showing a 12% average improvement over previous top-performing models. Strengths: 1. The outcomes of the experiments seem to be promising. 2. The problem addressed is intriguing and holds substantial value. Weaknesses: 1. The challenge suggested by the authors is debatable. They assert that "classical decomposition algorithms cannot be applied to data with a period larger than the size of the sliding window." In my view, this problem seems relatively simple to tackle by merely extending the length of the sliding window. Additionally, the authors seem to continue to depend on the moving average method to ascertain the trend and the labeled stable components. Therefore, they need to clarify how their technique specifically addresses this problem. 2. The rationale for creating dynamic decomposition remains vague. The sole beneficial intermediate outcome yielded by dynamic decomposition is T_d, recognized as the predicted trend component. Nevertheless, it seems that there is a more direct and feasible approach to extract the trend component, much like the authors have done during the data preprocessing phase. The necessity of introducing this intricate trend component, T_d, in place of using the standard trend component, demands clarification. 3. The concern of dynamically modifying the information bottleneck doesn't seem to be a common occurrence in real-life scenarios. 
I struggle to envision a situation where it would be necessary to adjust the information bottleneck during the inference phase. The authors need to elaborate on the practical value of employing the diffusion model if they aim to emphasize their contribution to reducing the cost of modifying the information bottleneck. Furthermore, offering a principle to guide the dynamic setting of these hyperparameters could make their method more persuasive. 4. No experiments have been conducted to substantiate their method's superiority concerning reducing the high training cost associated with adjusting the information bottleneck. 5. There is insufficient engagement with previous research. The ensuing paper also examines the impact of the trend component in anomaly detection, but the authors seemingly fail to appropriately cite this paper or compare their method with it. [1] Zhang, Chaoli, et al. "TFAD: A decomposition time series anomaly detection architecture with time-frequency analysis." Proceedings of the 31st ACM International Conference on Information & Knowledge Management. 2022. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: The authors can refer to the weakness listed above. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: The authors do acknowledge some minor limitations in their work. However, the primary limitation that is yet to be addressed pertains to the types of anomaly patterns their model cannot detect. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your valuable comments. We will answer the questions one by one. **Q1: Necessity and details of dynamic decomposition.** **Q1.1: Why not simply extend the length of the sliding window?** Below, we expound the reasonableness of Challenge 1 (Line 29 in the paper). Excessively extending the length of the sliding window constrains adaptability due to expensive computational resources and huge memory consumption. Taking the SMD dataset with a period of 1440 as an example: - Expensive computational resources: extending the length of the sliding window results in a corresponding explosion of model parameters. Many previous algorithms, such as MTAD-GAT [1], utilize a sliding window of 128 or less. If we extend the length to 1440 or larger, the model parameters of the input layer alone increase by a factor of more than 10. - Huge memory consumption: when the model is deployed online in the real world, the incoming data stream must be cached for a full sliding window before being fed into the model. This results in a more than 10x increase in memory consumption. **Q1.2: Why can we break the limitations of the local window?** As two reviewers have raised a similar question, we provide a consistent response in **Q2** of the "global" response. Thanks. **Q2: Necessity of the predicted trend component.** The core of this question is the same as **Q1**. Obtaining the trend component directly using the moving average, as done in the data preprocessing phase, is more direct. However, this method necessitates a sliding window several times longer than the period. Otherwise, the "standard trend component" merely represents local smoothing of the original series, lacking genuine trend extraction. In the case of long-period time series, employing such a large sliding window presents many problems. 
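To make the window-length argument in Q2 concrete, here is a minimal numpy sketch (our own illustration, not code from the paper) showing that a moving average whose window is much shorter than the period only smooths the series locally, while a window spanning the full period actually recovers the trend:

```python
import numpy as np

def moving_average_trend(x, window):
    # Centered moving average via convolution ("same"-length output).
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="same")

# Toy series: linear trend plus a seasonal component with period 1440.
t = np.arange(4 * 1440)
trend = 0.001 * t
x = trend + np.sin(2 * np.pi * t / 1440)

# Window of 128 (a typical sliding-window size) vs. a full period of 1440.
short = moving_average_trend(x, 128)
full = moving_average_trend(x, 1440)

# Compare to the true trend away from the boundaries: the short window
# barely attenuates the seasonality; the full-period window removes it.
err_short = np.abs(short - trend)[1440:-1440].max()
err_full = np.abs(full - trend)[1440:-1440].max()
```

With these numbers, `err_short` stays close to the full seasonal amplitude while `err_full` is near zero, which is exactly the "local smoothing vs. genuine trend extraction" distinction drawn above.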
**Q3: Necessity of dynamically modifying the information bottleneck and the principle of hyperparameter setting.** Below, we expound the reasonableness of Challenge 2 (Line 34 in the paper). Firstly, we express regret for the confusion arising from our imprecise statement (line 52 in the paper). The term "inference" would be more appropriately substituted with "revision". In truth, the requirement to adjust information bottlenecks not only exists but is also common in the real world: - Training: when deploying the anomaly detection model in a novel scenario, guaranteeing the availability of the initial information bottleneck size proves challenging. Consequently, it becomes necessary to make multiple hyperparameter adjustments to revise the model. - Inference: in a real industry environment, temporal patterns are likely to change over time. The information bottleneck size set during training may not always yield satisfactory performance during inference. Hence, it becomes necessary to adjust it to revise the model. Unlike previous methods that require retraining the model when adjusting the information bottleneck, our approach can flexibly modify the information bottleneck without retraining. Consequently, our method substantially reduces the training cost associated with information bottleneck modifications, even approaching zero cost. Regarding the principles of setting hyperparameters, a thorough analysis and discussion has already been provided in Appendix F. **Q4: Verify the superiority of the model in reducing training cost.** Our algorithm inherently possesses the capacity to reduce the high training cost associated with adjusting the information bottleneck. This superiority stems from the fundamental design logic of our method. Specifically, in contrast to previous methods, our approach can flexibly adjust the bottleneck without retraining. 
In other words, our method incurs close to zero cost solely for adjusting the information bottleneck. Furthermore, as shown in Figure 4(b) in the paper, our algorithm exhibits superior performance and greater insensitivity to parameters. This advantage is evident as our algorithm achieves excellent performance with fewer adjustments. The negligible cost of a single adjustment and the limited number of adjustments required both substantiate the superiority of our design. **Q5: Insufficient investigation of previous research.** Apologies for not providing a comprehensive comparison with previous research. In the revised version, we will include relevant analyses of TFAD [2]. TFAD notes the significance of the trend in anomaly detection. However, for series decomposition, it employs a method based on the HP filter during both training and inference. This static method lacks applicability for real-world scenarios where data is updated in real time. Furthermore, it uses the distance between the context window and the suspect window as the anomaly score. This method heavily relies on the assumption that the context window is normal, which is challenging to ensure in the complex real world. We conducted comparison experiments between our method and TFAD using the official open-source code. The experimental results are summarized in the table below. The detailed version is available in the "global" response (Table 1 in the attached PDF). As analyzed in the previous paragraph, our method achieves the best experimental results. | Method | PSM (F1) | SMD (F1) | SWaT (F1) | Average (F1) | | :----: | :--------: | :--------: | :--------: | :----------: | | TFAD | 0.7520 | 0.7149 | 0.6953 | 0.7207 | | Ours | **0.7609** | **0.8682** | **0.7812** | **0.8034** | **References:** [1] Hang Zhao, et al. “Multivariate time-series anomaly detection via graph attention network.”, In *ICDM*, 2020. [2] Chaoli Zhang, et al. 
“TFAD: A decomposition time series anomaly detection architecture with time-frequency analysis.”, In *CIKM*, 2022.
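The claim in Q3 above, that the information bottleneck can be adjusted without retraining, follows from how a standard DDPM-style forward process works. A hedged sketch (generic diffusion math with an assumed linear beta schedule, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 1000)   # assumed linear noise schedule
alpha_bar = np.cumprod(1.0 - betas)     # fraction of signal kept at step t

def noised(x0, t):
    # Forward diffusion: x_t = sqrt(abar_t) * x_0 + sqrt(1 - abar_t) * eps.
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

# The noise step t acts as an externally controlled information bottleneck:
# larger t keeps less of x_0. Changing t at revision time requires no
# retraining of the denoiser's weights.
x0 = np.ones(8)
x_loose, x_tight = noised(x0, 10), noised(x0, 900)
```

Since `alpha_bar` is monotonically decreasing in `t`, tightening or loosening the bottleneck is a pure hyperparameter change on an already-trained model.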
On Adversarial Training without Perturbing all Examples
Reject
Summary: The authors propose a new approach called Subset Adversarial Training (SAT), which differs from traditional adversarial training methods that generate adversarial examples on the whole training set. Instead, SAT applies adversarial training to a subset of the training data. They studied two variants of subset adversarial training (CSAT and ESAT). They found that robust training on one class could generalize to other classes that were not adversarially trained, which is surprising. They also found that ESAT, where they adversarially train on harder examples, gives a surprising boost to downstream robust performance with much less data. The paper also discusses the concept of loss balancing, which is used to counteract an imbalance between the adversarial subset and the non-adversarial subset when the training split is not even. The authors found that loss balancing is important for the adversarial robustness transfer observed. In conclusion, the paper presents a novel approach to adversarial training that could help us better understand the underlying mechanism of robust learning, as well as having potential implications for more efficient adversarial training. Strengths: - The setting of the experiments is interesting. It is surprising that adversarially training on a single class yields adversarial robustness for other classes. The originality of the experiment is strong. - The experiments demonstrate the possibility of decreasing the cost of adversarial training. - The paper is also very clear, with thorough experiments and analysis Weaknesses: - Even though the finding is interesting, I think the paper could do more in terms of understanding its implications for robust generalization. What does robust generalization to other classes imply or show about the process of adversarial learning? - While the experiments demonstrate the possibility of decreasing the cost of adversarial training, they don't demonstrate this in more challenging scenarios. 
I understand that the paper's intention is to understand how adversarial generalization happens as opposed to achieving the best performance, but if it is the paper's intention to gain further understanding of the adversarial training process, I am hoping for more analysis and comments about its implications for robust generalization. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: One thing that I would like to see a little bit more of is analysis and discussion of the implications of the findings. For example, the authors have observed that the difficulty of a class is the main driver of its robust transferability to other classes. Could the authors, for example, construct a synthetic 11th class on CIFAR-10 of varying difficulty that is completely unrelated to CIFAR-10, and see whether adversarial training on only this 11th class can give robust generalization to the original 10 classes? If the difficulty is increased, what is the extent of the robust generalization? I am a little bit unsure as to how the 11th class could be constructed, but this is an example of an experiment where I would like to see the authors dig deeper into the possible implications. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 2 fair Limitations: The authors have adequately addressed the limitation of the potential negative societal impact of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one area, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
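The loss balancing this review refers to can be illustrated with a minimal sketch. Note this is a plausible per-subset reweighting written for illustration only; the paper's exact balancing scheme may differ:

```python
import numpy as np

def balanced_loss(losses_adv, losses_clean, w=0.5):
    # Average each subset separately, then mix with weight w, so a small
    # adversarial subset A is not drowned out by the clean subset B.
    # (Illustrative scheme; not necessarily the paper's exact formula.)
    la = float(np.mean(losses_adv))
    lc = float(np.mean(losses_clean))
    return w * la + (1.0 - w) * lc

# With a naive global mean, 2 adversarial examples among 8 contribute only
# a quarter of the signal; balancing gives the two subsets equal weight.
naive = float(np.mean([2.0, 2.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]))
balanced = balanced_loss([2.0, 2.0], [0.0] * 6)
```

The point is simply that when the A/B split is uneven, a per-subset average keeps the adversarial gradient signal at a fixed fraction of the total loss regardless of |A|.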
Rebuttal 1: Rebuttal: - **Q1: No comments on improving understanding of AT.** While we agree that further insights into robustness transfer would be beneficial, we believe our submission remains of strong interest to the scientific community (note the reviewers' comments on our surprising findings: R1, R2, R4, R5). That is, our original experimental setup (R5) may act as a basis for further research and lead to interesting findings down the road. - **Q2: Suggested CIFAR-10+1 experiment.** We appreciate the input for original experiments and have evaluated exactly such a setup. We synthesized a set of 11th classes from CIFAR-100's superclasses -- of which there are 20, see below -- and perform CSAT on this 11th class to evaluate the robust accuracy gains on the original CIFAR-10 classes. Results are reported in figure 3 in the rebuttal pdf. We continue to observe a correspondence between average entropy $\overline{H}_c$ and the robustness transfer of a class. As in the main paper, we evaluate $H_c$ on non-adv. trained models. Note that, while the best performing setup ($A=$ *rodent*) with rob. acc of $33.3$\% does not improve upon the best in the main paper ($A=$ *cat*, with a rob. acc $> 37.8$\%), the number of examples in $A$ is only $|A|=2500$, thus less than $5$\% of the training data. This implies that it could be possible to augment datasets with additional classes/examples that provide high $\overline{H}$, which in turn increase robust accuracy quickest. The result being: baseline AT performance with SAT on a very small but highly effective selection of $A$. We will add this experiment and its implications to the final paper. 
CIFAR-100 Superclasses:
- aquatic-mammal: ['beaver', 'dolphin', 'otter', 'seal', 'whale']
- fish: ['aquarium_fish', 'flatfish', 'ray', 'shark', 'trout']
- flower: ['orchid', 'poppy', 'rose', 'sunflower', 'tulip']
- container: ['bottle', 'bowl', 'can', 'cup', 'plate']
- fruit: ['apple', 'mushroom', 'orange', 'pear', 'sweet_pepper']
- device: ['clock', 'keyboard', 'lamp', 'telephone', 'television']
- furniture: ['bed', 'chair', 'couch', 'table', 'wardrobe']
- insect: ['bee', 'beetle', 'butterfly', 'caterpillar', 'cockroach']
- large-carnivore: ['bear', 'leopard', 'lion', 'tiger', 'wolf']
- building: ['bridge', 'castle', 'house', 'road', 'skyscraper']
- scene: ['cloud', 'forest', 'mountain', 'plain', 'sea']
- large-mammal: ['camel', 'cattle', 'chimpanzee', 'elephant', 'kangaroo']
- small-mammal: ['fox', 'porcupine', 'possum', 'raccoon', 'skunk']
- crustacean: ['crab', 'lobster', 'snail', 'spider', 'worm']
- human: ['baby', 'boy', 'girl', 'man', 'woman']
- reptile: ['crocodile', 'dinosaur', 'lizard', 'snake', 'turtle']
- rodent: ['hamster', 'mouse', 'rabbit', 'shrew', 'squirrel']
- tree: ['maple_tree', 'oak_tree', 'palm_tree', 'pine_tree', 'willow_tree']
- vehicle: ['bicycle', 'bus', 'motorcycle', 'pickup_truck', 'train']
- utility-vehicle: ['lawn_mower', 'rocket', 'streetcar', 'tank', 'tractor']
--- Rebuttal Comment 1.1: Title: Thank you for the additional experiments Comment: Thank you for the additional experiment. I find the paper quite interesting, and have updated my score to accept.
Summary: This paper demonstrates an interesting observation: when we conduct adversarial training, we can choose to generate adversarial examples on only a subset of the training data; if this subset contains the hardest examples, then adversarial training on a subset can achieve robustness competitive with training on the whole dataset. In addition, models trained in such a manner demonstrate good transferable features. Strengths: 1. The observation is surprising and interesting, which indicates the transferability of adversarial examples across different classes. 2. The authors conduct comprehensive experiments on various datasets to validate the findings. Weaknesses: 1. One major concern is the contribution: the proposed method neither improves the robust accuracy (or clean accuracy) nor improves the training efficiency (because SAT still uses PGD-7 to generate adversarial examples, which is inefficient). 2. All the experiments are conducted on $l_2$ bounded adversarial perturbations; more types of adversarial perturbations should be included, especially the $l_\infty$ bounded ones, which are popular for benchmarking. In addition, for CIFAR10, the adversarial budget $\epsilon = 0.5$ is very small when considering the dimensionality of the input image. Experiments based on larger adversarial budgets should be included, e.g. $\epsilon = 2$ for CIFAR10. 3. Similar to the first point, the experiments do not demonstrate the advantages of the proposed method. In addition to adversarial training, is the method general and compatible with other popular robust learning methods, such as TRADES? Is the observation the same in this context? 4. It would be better if the authors could provide some intuition or explanations for the observations in this paper. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: In addition to the concerns in the "weakness" part that I expect answers from the authors, I have minor questions as below: 1. 
Why do the authors rank the difficulty of training instances based on non-adv. trained models, given that the difficulty ranking can be quite different between adv. and non-adv. trained models? 2. Figure 8 demonstrates that the transferability can vary a bit given the number of instances in the set A. For practitioners, do the authors have any hints for how to choose |A|? Because of the weakness part and the questions in this section, I cannot recommend acceptance based on the current manuscript. I will do a re-evaluation after the authors' feedback. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: The limitations and the broader societal impacts are not adequately discussed in the current manuscript, although ethics should not be an issue for general research like this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: - **Q1: Marginal contribution.** We reiterate our answer to Q4 of R3, but also add that SAT works with 1-step FGSM-RS as well (see figure 14 in the appendix): We believe that a paper does not need to provide improved performance or efficiency if it can provide a set of experiments that highlight surprising phenomena or counter-intuitive results that could lead to new research directions. We categorize our submission as the latter, and we observe that 4 reviewers agree (R1, R2, R5 and R4 themselves). In contrast to related work on robustness transfer and AT efficiency (as discussed in related work and our method section), we discuss a surprising phenomenon within robustness transfer between classes and examples. In our eyes, the most surprising being that SAT on a single class (e.g. *cat*) can provide better robustness transfer to seemingly unrelated classes (e.g. *truck*) than SAT on a related class would (e.g. *car*). We kindly ask the reviewer to reconsider her/his expectations of a good scientific paper. - **Q2: Larger perturbation budgets and $L_\text{inf}$.** $\epsilon=0.5$ is the standard for $L_2$ AT on CIFAR-10 (e.g. see [C3]). Irrespectively, we agree that different $\epsilon$ budgets are an interesting set of experiments, although we note that $\epsilon=2.0$ is larger than the smallest $L_2$ distance between CIFAR-10 classes. Consequently, we evaluate for $\epsilon=0.25$ and $\epsilon=1.0$. Find the results in the rebuttal pdf in figure 2 a-c. We continue to observe non-trivial robustness transfer to B for all $\epsilon$ budgets, with diminishing returns for larger $\epsilon$. Notably though, small $\epsilon$ provides very strong transfer, especially on hard classes. W.r.t. $L_\text{inf}$, we concur that such an evaluation is interesting and will do so for the final version. 
For this rebuttal, we have repeated the S-ESAT experiments for ImageNet-200 $\rightarrow$ Caltech-256 and Flowers-102 using the $L_\text{inf}$ norm with $\epsilon=8/255$. Additionally, we repeated the ESAT experiments on ImageNet-200. Find the results in the rebuttal pdf in figure 1 a-c. We observe very similar characteristics as for $L_2$. - **Q3: Practical advantages and compatibility with TRADES.** Similar to our answer to Q1, we believe that scientific work does not necessitate immediate practical applications. In our case, we investigate robustness transfer and provide a series of unexpected results. We anticipate that our insight (robust features generalize surprisingly well to unseen classes and examples, especially for downstream tasks) will spur additional studies on making AT less data hungry. Note that we discuss such a use case in section 4.3 with downstream task transfers. Here it is noteworthy that it is sufficient to use only $30$\% of training data with S-ESAT and still achieve near baseline AT performance on the target task. This is of particular interest in the foundational setting, where off-the-shelf AT models often don't exist. Additionally, we highlight that SAT can be used to synthesise training sets: that is, add classes or examples providing high entropy $\overline{H}$ that in turn quickly increase robust accuracy. That such a setup can work is discussed in an additional experiment in the rebuttal pdf, figure 3. For a discussion, we kindly refer the reviewer to our response to R5 (DBwu). TRADES is an adjusted loss to trade off robustness against clean accuracy. With that, we have no reason to believe that it would provide conflicting results with SAT. Nonetheless, investigating the degree of robustness transfer w.r.t. TRADES would be an interesting evaluation for future work, which we think is out of scope for this submission. - **Q4: Missing intuitions.** Precisely this question will likely lead to improved AT. 
At this point, we cannot provide a resolution, but conjecture that our submission will excite other groups to pursue an answer to the phenomenon subject of our submission. - **Q5: Ranking instances.** We chose to rank instances on non-adv. trained models, as this is closer to existing work in the literature (see section A.2 in the appendix). - **Q6: Hint to choose $|A|$.** We kindly ask the reviewer to be specific about the stated claim that transferability "varies a lot". On the contrary, we observe consistently strong transfer when using random or hard example rankings. It is to be expected that robust accuracy is lowest when $|A|$ is low and otherwise high when $|A|$ is large. What is unexpected, though, is that the robust accuracy reaches near baseline AT performance when $|A|$ is only about $30$\% of the source task training data. Based on this observation, we give the following recommendation: for downstream tasks, around $30$\% of source data is sufficient -- ideally the hardest examples. To reach near baseline performance on the source task, around $50$\% of data is sufficient (again, ideally the hardest). [C3] Croce, Francesco, and Matthias Hein. "Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks." ICML, 2020.
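The hardness ranking discussed in Q5/Q6 is based on predictive entropy computed with non-adv. trained models. A minimal sketch of that generic metric (our own illustration of per-class average entropy in the spirit of $\overline{H}_c$, not the authors' code):

```python
import numpy as np

def predictive_entropy(probs):
    # Shannon entropy of each row of predicted class probabilities.
    eps = 1e-12
    return -(probs * np.log(probs + eps)).sum(axis=1)

def class_difficulty(probs, labels, num_classes):
    # Mean predictive entropy per ground-truth class; higher = harder,
    # mirroring the average-entropy criterion used to pick subset A.
    H = predictive_entropy(probs)
    return np.array([H[labels == c].mean() for c in range(num_classes)])

# An easy example (confident prediction) vs. a hard one (uniform prediction).
probs = np.array([[0.98, 0.01, 0.01],
                  [1 / 3, 1 / 3, 1 / 3]])
labels = np.array([0, 1])
diff = class_difficulty(probs, labels, num_classes=2)
```

A uniform prediction over K classes yields the maximal entropy log(K), so the class holding such examples ranks as hardest and would be favored for the adversarial subset.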
Summary: The authors propose the use of Subset Adversarial Training (SAT), a technique that splits the training data into A and B and constructs AEs only for data in A. Using SAT, they demonstrate how adversarial robustness transfers between classes, examples, and tasks. The authors report several insights: 1) they observe robustness transfer by difficulty and to classes in B; 2) hard examples provide better robustness transfer; and 3) generating AEs on part of the data (e.g., 50%) is enough to reach the standard AT accuracy. Strengths: 1. The paper is relatively easy to follow 2. Existing empirical results seem sound Weaknesses: 1. If I understand correctly, the experiments were done only on L2, even though the most common AT is done using L_inf. Can the authors present results using L_inf? 2. I'm missing many details about the AT process; you need to be much more specific for reproduction purposes. Which AT is the baseline? Did you try other methods? Which method did you use? Madry's/TRADES/other? The paper needs to be much clearer. Many important implementation details are missing. 3. It's hard to validate the results without supplying code/models. 4. The novelty is marginal, due to the fact that much prior art exists on the transferability of AEs, revisiting hard examples, or pruning a part of the training examples throughout training. The paper would benefit from a comparison of these methods to SAT, so we can see the differences in performance/resource requirements/etc. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: See Weaknesses section. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: No discussion on limitations; I suggest the authors add one. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: - **Q1: Evaluation on L\_inf.** We concur that evaluation on L\_inf is an interesting addition to our submission and will do so for the final version. For this rebuttal, we have repeated the S-ESAT experiments for ImageNet-200 $\rightarrow$ Caltech-256 and Flowers-102 using the L\_inf norm with $\epsilon=8/255$. Additionally, we repeated the ESAT experiments on ImageNet-200. Find the results in the rebuttal pdf in figure 1 a-c. We observe very similar characteristics as for L2. - **Q2: Implementation details.** All training details can be found in the second paragraph of section 4 and in the appendix. As stated in lines 416-417, we use traditional adv. training optimizing the cross-entropy loss. Hence we do not use TRADES. - **Q3: Unpublished code/models.** We have compiled an anonymized repository here: https://anonymous.4open.science/r/SAT-BF9B/. Model checkpoints for our main results will be made public with the final version. We kindly ask the reviewer to specify which model checkpoints they would like to have access to, and we will gladly provide them during the discussion phase. - **Q4: Marginal contribution.** We believe that a paper does not need to provide improved performance or efficiency if it can provide a set of experiments that highlight surprising phenomena or counter-intuitive results that could lead to new research directions. We categorize our submission as the latter, and we observe that 4 reviewers agree (R1, R2, R4, R5). In contrast to related work on robustness transfer and AT efficiency (as discussed in related work and our method section), we reveal a surprising phenomenon within robustness transfer between classes and examples. In our opinion, the most surprising being that SAT on a single class (e.g. *cat*) can provide better robustness transfer to seemingly unrelated classes (e.g. *truck*) than SAT on a related class would (e.g. *car*). 
Along these lines, we kindly ask the reviewer to consider what our submission can contribute to the research community. - **No limitations.** Please note that we have added a limitation and broader impact statement in the conclusion section, as permitted by the NeurIPS author guidelines. --- Rebuttal Comment 1.1: Comment: I thank the authors for their additional resources and experiment. I've raised my score to 5.
Summary: This work considers the transferability of adversarial robustness for partially adversarially trained models. The authors examine 3 variants of subset adversarial training (SAT): Class SAT, where only samples from selected, difficult classes are adversarially perturbed in training; Example SAT, where only examples with the highest predictive entropy are perturbed; Source-task SAT, where SAT-trained, robust models are fine-tuned on downstream training sets and evaluated for downstream adversarial robustness. They further draw connections between SAT and loss balancing, thus proposing a method for sample-efficient, low-cost adversarial robustness transfer between datasets in foundational settings. The authors report interesting insights from various experiments. From CSAT, it is noted that difficult classes transfer best; class-wise transfer gains are asymmetric; and robustness transfers between seemingly unrelated classes. From ESAT, the authors concur with previous findings that harder examples contribute more to training robust models; the gain in robust accuracy is more rapid than CSAT with respect to the size of subset A; hardness rankings suffer from a possible lack of sample diversity and its performance is matched by random rankings. From S-SAT, they find that SAT on the source dataset with only 30% of AEs can match the robustness transfer gains using normal adversarial training, on the downstream dataset; both clean and robust downstream accuracies are transferred and they are positively correlated under appropriate loss balancing. Strengths: 1. **Data efficiency.** The proposed SAT greatly reduces the amount of data required for adversarial training, which is promising for resource limited or real-world settings. ESAT with only 50% of AEs matches normal AT performance; S-SAT with 30% of AEs matches AT (on the source dataset) as well. 2. **Loss balancing.** I appreciate the discussion on connections between the SAT formulation and loss balancing. 
I also recognise that both clean and robust accuracy transfer positively from source to downstream tasks under S-SAT with appropriate loss balancing, which is rare and difficult for adversarial training. 3. **Experimentation.** The experiments are relatively thorough (except that only ResNet-18 and ResNet-50 are SAT-trained) and many details (such as the inter-class robustness transfer statistics for CSAT, or the difficulty rankings of classes) are provided, which give rise to valuable insights. 4. **Presentation.** The presentation of this work is exemplary. It is well-organised, logically coherent and persuasive. Weaknesses: ### 1 Cost and efficiency 1.1 $\hspace{5pt}$ SAT relies on meticulous pre-processing to discover hard classes and requires access to the per-epoch model weight snapshots of a normally trained classifier to compute the difficulty metric of Equation 3. This shifts part of the cost of adversarial training to the pre-processing stage and can be costly for large models/datasets in foundational settings. 1.2 $\hspace{5pt}$ More importantly, SAT relies on this non-robust classifier, presumably with the same architecture and training data as the target model for subset adversarial training. This means that all the pre-training and loss balancing procedures have to be repeated for every single new model-dataset combination, which might end up being more costly than normal adversarial training. 1.3 $\hspace{5pt}$ One also notes that hardness ranking is important for the guaranteed performance of SAT, especially for CSAT and to a lesser extent for ESAT (where the authors acknowledge that it is "possible to accidentally select poor performing subsets", as per the easy rankings experiment). ### 2 Experimentation 2.1 $\hspace{5pt}$ SAT is only verified for ResNet-18 and ResNet-50, which is a non-negligible shortcoming, for the reasons described above.
2.2 $\hspace{5pt}$ Does SAT hold for other convolutional and non-convolutional architectures of variable capacity? 2.3 $\hspace{5pt}$ Is it not cost- and time-prohibitive to run SAT for more than 2 baseline models? Would this also be a barrier that impedes the practical adoption of SAT? ### 3 Scaling up 3.1 $\hspace{5pt}$ SAT experiments are notably performed on smaller datasets with fewer (or a subset of) classes. The computational complexity of SAT seems to scale non-negligibly with the number of classes (CSAT) and the size of the dataset (ESAT), which is not ideal on real-world datasets with fine-grained labels in foundational settings. 3.2 $\hspace{5pt}$ As mentioned above, even considering the S-SAT setup (where one does not need to do SAT on every downstream dataset), it is very costly to add a new model to the SAT experiments because of the method's dependence on snapshots of a normally-trained, non-robust version of the same model, for hardness rankings and loss balancing. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: ### Major concerns. 1. Could the authors address the concern about shifting the time/computational cost of adversarial training from the training phase to the pre-training / pre-processing phases? 2. Furthermore, even if the non-robust, pre-trained weight snapshots of the target model are available off-the-shelf, using S-SAT still requires one to adversarially pre-train a model from scratch on a large-scale dataset. Since the transferred robustness of S-SAT is comparable to that of normally AT'd models on the source dataset, could the authors elucidate the particular advantage of SAT (as opposed to simply fine-tuning an off-the-shelf robust, AT'd model)? 3.
Could the authors suggest an efficient method of validating SAT for different dataset-model combinations (which does not require recomputation of task-specific hardness rankings and loss balancing settings); or alternatively, validate SAT on diverse convolutional and non-convolutional baselines? ### Minor comments. 4. What do the authors think about the connections between loss balancing and oversampling? How does CSAT perform for long-tailed or imbalanced datasets? 5. Have the authors considered the impact of homogeneous and heterogeneous data splitting methods (and resultant label or covariate shifts) on SAT models' clean and robust accuracy? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: The authors have adequately addressed the societal and ethical limitations of their work. This work strives to improve the foundational adversarial robustness of AI systems in practice; experimental and implementation details have been documented. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: - **Q1 Cost and Efficiency: SAT relies on meticulous pre-processing.** Given a new architecture and a new dataset, the experimental process of finding good performing configurations involves training multiple models. During training, the entropy statistics for SAT can be cheaply computed without any substantial overhead, since softmax is applied to every instance anyway. Consequently, if SAT is to be adopted, such statistics can be computed on the fly. - **Q2 Experimentation: Why use S-SAT over finetuning off-the-shelf adv. trained model.** Our assumption w.r.t. foundational models is that off-the-shelf adversarially trained models do not exist, since they are prohibitively expensive to train. In such a case, we'd argue that utilizing S-SAT can reduce the computational cost to 30\% and still reach competitive adv. robustness on downstream tasks. - **Q2 Experimentation: SAT is only verified on ResNet-18 and ResNet-50.** First and foremost, we reiterate that our study is not on improving AT efficiency, but on revealing the transferability of adversarial robustness. To show this phenomenon generalizes to other architectures, we trained and evaluated SAT for $\epsilon_2 = 0.5$ with $A$ containing 50\% of the data on a wide ResNet with width 16 and depth 70 (WRN-70-16), as is common in the literature on AT [C1, C2]. Our baseline WRN-70-16 achieves a clean accuracy of 83.8\% and a robust accuracy of 62.1\%. Our ESAT achieves a clean accuracy of 84.8\% and a robust accuracy of 57.0\%. These results are in line with our main paper observations. Code for this new experiment can be found here: https://anonymous.4open.science/r/SAT-BF9B/ . - **Q2.3: Cost prohibitive baseline training?** We have shown that 50\% of the training data recovers baseline performance for ESAT and 30\% of the data for S-ESAT. If adopted, no baseline adversarial training is needed in practice -- only for sanity checks or comparisons. Clean baselines (without AT), on the other hand, remain necessary to determine the entropy ranking.
- **Q3 Scaling: SAT is difficult to scale.** While the full range of experiments is costly to scale, the point of SAT is not to provide a definite recipe for improving AT. Instead, SAT provides a means to investigate facets of AT. - **Q4 and Q5 Imbalanced datasets.** Thank you for providing two very interesting avenues for future work. Given our results on robustness transfer and loss balancing, it is indeed plausible to assume that even an undersampled and hard class can contribute substantially to training robust features for the whole model. At this point though, it is difficult to make any strong predictions. [C1] Rebuffi, Sylvestre-Alvise, et al. "Fixing data augmentation to improve adversarial robustness." arXiv preprint arXiv:2103.01946 (2021). [C2] Gowal, Sven, et al. "Improving robustness using generated data." NeurIPS (2021). --- Rebuttal Comment 1.1: Title: Thank you for the Clarifications Comment: I have read the authors' rebuttal and other reviews in detail. I maintain that the reasons for acceptance outweigh reasons for rejection. In light of other reviews, I have increased my score from 5 to 7. Although I still have reservations about the cost overhead of using SAT in practice, I believe SAT's findings regarding the transferability (across classes and hard examples) of adversarial training to be fresh and noteworthy. I thank the authors for clarifying that the main objective of this work is "not to provide a definite recipe for improving AT" but rather to provide "a means to investigate facets of AT". I look forward to future work that further connects robustness transfer, loss balancing and sampling.
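The on-the-fly entropy bookkeeping described in Q1 of the rebuttal above could look roughly like the following sketch. This is hypothetical illustrative code, not the authors' implementation: `predictive_entropy` and the per-epoch snapshot averaging are our own names, standing in for the entropy-based difficulty metric the paper references.

```python
import numpy as np

def predictive_entropy(logits):
    """Per-example predictive entropy -sum_c p_c log p_c, computed from raw logits."""
    z = logits - logits.max(axis=1, keepdims=True)     # numerically stable softmax
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return -(p * np.log(p + 1e-12)).sum(axis=1)

# Hypothetical bookkeeping: average per-example entropies over epoch snapshots,
# then rank examples with the highest mean entropy (the "hardest") first.
rng = np.random.default_rng(0)
logits_per_epoch = [rng.normal(size=(8, 10)) for _ in range(3)]  # 8 examples, 10 classes
mean_entropy = np.mean([predictive_entropy(l) for l in logits_per_epoch], axis=0)
hardness_ranking = np.argsort(-mean_entropy)  # highest-entropy examples first
```

Since the softmax outputs already exist at every training step, accumulating these statistics adds essentially no overhead, which is the point the rebuttal makes.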
Rebuttal 1: Rebuttal: We thank all reviewers for their time and valuable feedback on our manuscript. We are very pleased to read that R1, R2, R4 and R5 found our insights interesting and surprising (R4, R5), and that they may provide insights for future work (R1). Our experiments were described as clear and thorough (R3, R5) and comprehensive (R4). Furthermore, we highlight that our presentation was found to be exemplary and persuasive by R2 and was rated excellent by R5. We address each reviewer in individual comments and provide an additional pdf with figures supplementing our rebuttal. Pdf: /pdf/3e79473e420e6e09a4fb762ae67d8967b1385ced.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper investigates the transferability of adversarial robustness among different classes and different examples. Different from previous studies, the authors split the training dataset into two groups and only apply adversarial training on one group while the other uses clean training. Based on the experiment results, the authors obtain several interesting observations, including that classes without adversarial training can still have some capacity to defend against adversarial attacks, that hard classes and examples can provide better robustness transferability than easier ones, and that only 50% of the training data is sufficient to recover the performance of the vanilla adversarial training method in terms of robustness, etc. Strengths: 1. This paper explores the transferability of adversarial robustness from a new perspective and hence proposes a novel training mechanism to study. 2. This paper conducts a series of experiments to investigate the robustness transferability. Based on the experiments, some interesting observations are obtained, which may give some new insights for future works. Weaknesses: 1. The motivation of this work is not quite clear. Although the authors find that some classes can still obtain some capacity to defend against adversarial attacks without adversarial training, data samples of these classes are available in the training dataset. Hence, applying adversarial training on all data samples of all classes directly can achieve much better robustness, compared with the transferred robustness obtained in this paper. Hence, it's not clear why the authors study this kind of transferability when all training data are available, and which scenarios are suitable for the problem studied in this paper. 2. Based on the experimental results, the authors claim that utilizing only half of the training data can achieve robustness performance comparable with vanilla adversarial training methods. However, the related experiments only report the robustness of different methods.
Considering the trade-off between clean accuracy and robustness in adversarially trained models, it would be better if the corresponding clean accuracy of each method could also be provided. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Can the authors discuss possible application scenarios or benefits of the study conducted in this paper in more detail? Although the transferability of adversarial robustness between different classes and examples under different training mechanisms is observed in this paper, it seems this observation cannot bring benefits to real applications. In practice, simply applying adversarial training on all data samples of all classes can achieve much better robustness. Hence, it's not clear what the benefits of the study conducted in this paper are in real applications, considering all data samples of all classes are used in the training stage. 2. Can the authors provide some understanding of why the rank class split cannot consistently outperform the random class split on the CIFAR10 validation set in terms of both clean accuracy and robustness, as shown in Figure 4? Does this result indicate that the hardness-based split on the class level cannot boost overall model performance? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Questions about how to apply the proposed training mechanism and the observations obtained from experiments in real applications to boost the robustness of models need to be discussed in detail. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: - **Q1: Motivation not entirely clear. How could SAT be useful in real applications?** We emphasize that our study is not one of improving existing methods, but of improving our understanding of adversarial training (AT) and its robustness transfer. In that, we find our observations to be of high interest to the community -- as R2, R4, R5 and R1 (themselves) noted. We anticipate that our insight (robust features generalize surprisingly well to unseen classes and examples, especially for downstream tasks) will spur additional studies on making AT less data hungry. Note that we discuss such a use case in section 4.3 with downstream task transfers. Here it is noteworthy that it is sufficient to use only $30$\% of the training data with S-ESAT and still achieve near-baseline AT performance on the target task. This is of particular interest in the foundational setting, where off-the-shelf AT models often don't exist. Additionally, we highlight that SAT can be used to synthesise training sets: that is, add classes or examples providing high entropy $\overline{H}$ that in turn quickly increase robust accuracy. That such a setup can work is discussed in an additional experiment in the rebuttal pdf, figure 3. For a discussion, we kindly refer the reviewer to our response to R5 (DBwu). - **Q2: Detailed comparison of SAT impact on clean accuracy.** Please note that all experiments starting from figure 4 contain clean accuracies. Figure 5 (CSAT) reports clean accuracies in the appendix in figure 13. To summarize: for all CSAT and ESAT experiments, we observe decreasing clean accuracy with increasing $|A|$ -- as is expected for adversarially trained models. Interestingly, this is not the case for S-CSAT and S-ESAT: here we observe *increasing* clean accuracy with increasing $|A|$. - **Q3: Why does the informed ranking in SAT converge to random rankings for large $|A|$ (e.g. fig. 4)?** For all rankings, we observe convergence to the full AT baseline.
The difference lies in how quickly each of these rankings achieves this convergence. Here, the informed ranking can improve robust accuracy quickest with the fewest training examples. Overall, though, this advantage diminishes with larger $|A|$. --- Rebuttal Comment 1.1: Title: Thank you for your clarifications Comment: I have read the authors' clarifications and I think they have addressed all the concerns in my review. Hence, I raised my score to 5.
A Dynamical System View of Langevin-Based Non-Convex Sampling
Accept (spotlight)
Summary: This paper presents a general framework for studying the convergence of last-iterate, noisy, and possibly biased Langevin-like discrete-time approximations to the continuous Langevin flow for sampling from a distribution. Strengths: This paper is (with one small quibble which I will explain below) very well written and well explained. Using the machinery of asymptotic pseudotrajectories, it establishes asymptotic convergence of a wide range of discrete-time sampling schemes to the Boltzmann distribution $e^{-f}$. The advantage of the pseudotrajectory framework is that it replaces standard "Cauchy-type" convergence (which requires checking that an infinitely long tail of iterates converges) with a weaker notion of convergence that only requires "finite length" tails. The upshot is that, due to cited source [6], this weaker sense of convergence is sufficient for convergence to a limit point of the original flow $\Phi$, provided that $\Phi$ has a unique fixed point. This allows for much more freedom in the analysis, and hence encompasses a wider range of conditions. The technique also has advantages over discretization-based approaches, as explained by the authors. Weaknesses: My biggest complaint with the paper is that perhaps the most interesting part of the framework is the 2-line proof of Theorem 2 relegated to the appendix. This argument relies on the limit-set characterization of [6] to show that asymptotic pseudotrajectories are sufficient to ensure convergence to the fixed point of the Langevin flow, namely the correct Boltzmann distribution. I think the authors should explain this point in the body, and explain which theorem of [6] is being invoked (I suspect Thm 0.1), and how its conditions are met. That way, the reader understands not simply how the proof framework of the paper works, but why we should believe that it is somehow the "correct" one. Technical Quality: 3 good Clarity: 3 good Questions for Authors: What is the theorem of [6] being invoked here?
Can the authors explain its conditions and why they are met? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: This work is purely asymptotic and provides no rates of convergence. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your input and remarks. We reply to your questions below, and we will revise our manuscript accordingly in the upcoming revision. > My biggest complaint with the paper is that perhaps the most interesting part of the framework is the 2-line proof of Theorem 2 relegated to the appendix. This argument relies on the limit-set characterization of [6] to show that asymptotic pseudotrajectories are sufficient to ensure convergence to the fixed point of the Langevin flow, namely the correct Boltzmann distribution. I think the authors should explain this point in the body, and explain which theorem of [6] is being invoked (I suspect Thm 0.1), and how its conditions are met. That way, the reader understands not simply how the proof framework of the paper works, but why we should believe that it is somehow the "correct" one. We agree with the reviewer. We only made this choice due to the page limit. We will expand the proof of the theorem to include more details. Indeed, Theorem 0.1 (see Theorem 5.7 (i) in M. Benaïm (2006), "Dynamics of stochastic approximation algorithms" for an extended version) is used. > What is the theorem of [6] being invoked here? Can the authors explain its conditions and why they are met? Theorem 0.1 is used here. The assumptions are: - $(\mathrm{law}(X_t))_t$ is an APT in the corresponding metric space (which we prove in Theorem 1 in the paper) - $(\mathrm{law}(X_t))_t$ is precompact (which we prove in Theorem 2 to be equivalent to Assumption (10)) - $\Phi$ is integrable (which is implied by Assumption 1 in our paper) These imply that the limit set of $(\mathrm{law}(X_t))_t$ is an *internally chain transitive* set. These conditions are met because Assumption (10) implies that the trajectory $(X_t)$ is precompact in the Wasserstein space. If the trajectory is also a Wasserstein APT, then it is guaranteed to converge to an ICT set of the flow corresponding to the SDE.
For the case of the Langevin SDE, we can show that the ICT set is the singleton $\{ \pi \}$. Following the notation of M. Benaïm (2006), "Dynamics of stochastic approximation algorithms", section 6.2, let $M$ be the set of absolutely continuous probability measures in $W_2$, let $\Lambda = \{\pi\}$, and define $V(\mu) = D_{\mathrm{KL}}(\mu \,\|\, \pi)$. Then, it is clear that $V$ is a Lyapunov function of the Langevin dynamics, whose value is strictly decreasing along the flow (as the time derivative of $V$ along the flow is the negative of the relative Fisher information, which is strictly positive for all measures other than $\pi$). Thus, all requirements of Proposition 6.4 are satisfied, showing that every ICT set is contained in $\Lambda$. In other words, the only point in the ICT set is $\pi$. --- We hope that the above addresses your questions - but please let us know if any of the above is not sufficiently clear. Thank you again for your input and positive evaluation, The authors --- Rebuttal Comment 1.1: Title: Thank you! Comment: I will likely maintain my score, but please add the two-line proof mentioned in my review to the body. And please include your exposition above of Theorem 5.7 (i) in M. Benaïm (2006) in the appendix. --- Reply to Comment 1.1.1: Comment: Thanks again for your time and your fruitful comments. We will definitely include what was discussed in the final revision.
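For readers less familiar with the identity invoked in the rebuttal above, the Lyapunov property of the KL divergence along the Langevin flow is the standard de Bruijn-type computation. This is a sketch consistent with the rebuttal's description, not a quote from the paper:

```latex
% Fokker-Planck equation of the Langevin SDE with target \pi \propto e^{-f}:
%   \partial_t \mu_t = \nabla \cdot (\mu_t \nabla f) + \Delta \mu_t.
% Differentiating V(\mu_t) = D_KL(\mu_t || \pi) along this flow gives
\frac{\mathrm{d}}{\mathrm{d}t}\, D_{\mathrm{KL}}(\mu_t \,\|\, \pi)
  = -\int \left\| \nabla \log \frac{\mathrm{d}\mu_t}{\mathrm{d}\pi} \right\|^2 \mathrm{d}\mu_t
  = -I(\mu_t \,\|\, \pi) \le 0,
```

where $I(\cdot \,\|\, \pi)$ is the relative Fisher information, which vanishes only at $\mu_t = \pi$; this strict decrease away from $\pi$ is exactly the Lyapunov property the rebuttal feeds into Proposition 6.4.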
Summary: The authors offer theoretical guarantees on the convergence of the last iterate of a very generic class of sampling methods to the stationary distribution, covering a very large class of sampling schemes in non-convex settings. This is achieved by showing that a large class of discrete sampling schemes can be mapped to continuous-time ones and then to a Wasserstein asymptotic pseudo-trajectory. The distribution of this process at large times then converges to the stationary one under quite general assumptions on the Langevin noise and an opportune annealing scheme for the learning rate. Strengths: The paper addresses the interesting and actively researched problem of sampling non-convex, high-dimensional densities. It explores a general framework that can be specialised to a wide range of sampling schemes, thereby providing a valuable guarantee for practical sampling scenarios. Notably, the authors employ a novel proof technique, which, to the best of my knowledge, is unique in the context of sampling problems. This innovative approach contributes to the originality of the paper and distinguishes it from existing research in the field, which mostly studies these problems as gradient flows. Furthermore, the paper exhibits exceptional writing quality, being highly readable, well-structured, and accessible to a broad audience. These qualities enhance its overall impact and make it easier for readers to grasp the main idea. Considering these strengths, I strongly believe that this paper is of high caliber. It offers significant contributions to the field, with its practical relevance, original proof technique, and excellent presentation. Weaknesses: I think this is a solid paper without major weaknesses, so I just have a minor remark. The main result of this paper is showing that at very large times the sampling schemes converge to the desired distribution.
It would be interesting to relax this condition and obtain bounds on the Wasserstein distance for a large (but finite) number of steps. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. Within this framework, is it possible to extract how the typical time to convergence scales with the size of the system? (For example, in the sense of obtaining a lower bound on the number of steps T(epsilon) needed to bring the distance between the distribution after T steps and the stationary one below epsilon.) 2. Related to the previous point: can you say something quantitative about the performance of ORMM beyond requiring fewer gradient calls? 3. Can you comment on the case where sigma(x_k) = 1/beta_k in the definition of LRM, where beta_k diverges to infinity as k increases? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: This paper contains no applications or experiments, as is expected for a paper of this kind. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your input and remarks. We reply to your questions below, and we will revise our manuscript accordingly in the upcoming revision. > The main result of this paper is showing that at very large times the sampling schemes converge to the desired distribution. It would be interesting to relax this condition and obtain bounds on the Wasserstein distance for a large (but finite) number of steps. From our analysis, a doubly exponential bound can be deduced: $\mathcal{W}_2$ error $= O\left(\frac{1}{\log\log n}\right)$. However, we believe that this loose bound offers no significant advantage over asymptotic convergence. Consequently, we have chosen to omit it. It is important to note that obtaining polynomial bounds is infeasible due to NP-hardness. We are also able to obtain results that are close in spirit to non-asymptotic results: assuming step-sizes $\gamma_n = \Theta(n^{-1})$ and exponential convergence of the continuous-time dynamics (which holds under, e.g., LSI), we can show that an LRM scheme converges in $\mathcal{W}^2_2$ at a rate of $O(n^{-1})$ under a warm-start condition, **under the same noise and bias conditions**. While we cannot estimate the duration of the burn-in phase, our argument holds for a wider range of stochastic and biased algorithms than in the current literature. > Within this framework, is it possible to extract how the typical time to convergence scales with the size of the system? (for example in the sense of obtaining a lower bound on the number of steps T(epsilon) to obtain a distance between the distribution after T steps and the stationary one less than epsilon). Obtaining lower bounds on the time complexity of sampling problems is very hard. There have been some recent works, and progress is being made for specific algorithms and simpler problem classes; see, e.g., Chatterji et al. (2021) "Oracle Lower Bounds for Stochastic Gradient Sampling Algorithms" and Chewi et al.
(2023) "Query lower bounds for log-concave sampling". In our setup, the class of target distributions is rich enough to encompass NP-hard problems, implying that the most difficult problems within this class require at least exponential time to solve. > Related to the previous point: can you say something quantitative on the performance of ORMM beyond requiring less gradient calls? Unfortunately, we are unable to provide such comparisons in theory, although we point out that the lack of such quantitative bounds for ORMM is not specific to our framework: it is present already in the context of purely **deterministic convex-concave optimization**, where optimistic methods originate, since their convergence rates do not surpass those of their non-optimistic counterparts. In most cases, the advantages of these methods are largely empirical. In this regard, our primary contribution lies in offering flexibility: by recycling past gradients, one can preserve the asymptotic convergence of the randomized mid-point method, demonstrating the potential to maintain favorable convergence behavior while saving 50% of the per-iteration costs. > Can you comment on the case where sigma(x_k)=1/beta_k in the definition of LRM, where beta_k is positively diverging as k increase? This is a very nice question. Diffusions with vanishing diffusion coefficient usually arise from simulated annealing processes. The problem with these time-inhomogeneous systems is that the flow (the $\Phi$ in the paper) is not going to be well-defined. We can make two possible comments here: 1. One can take $\Phi$ to be the flow of the resulting *ODE* when $\beta_k \to \infty$ (rather than the SDE), and compare the LRM scheme with the solutions of the ODE. This is a similar approach to the "asymptotically autonomous with limit equation" setting in our reference [6] (see section 2 of [6]).
Our analysis goes through without any changes, and the result will be "$(X_t)$ will almost surely converge to an ICT set of the ODE; in the case of annealed Langevin dynamics as the main process, this means that $(X_t)$ converges to a *stationary point* of the potential." This result is, however, unsatisfactory, as one usually performs simulated annealing to converge to a **global optimum**. 2. One can convert the time-inhomogeneous system to a time-homogeneous system (by, e.g., introducing a new variable for tracking time). This dynamics will have a well-defined flow. However, this flow does not have any equilibria (as the time variable goes to infinity). We will definitely explore this direction in our future works. --- We hope that the above addresses your questions - but please let us know if any of the above is not sufficiently clear. Thank you again for your input and positive evaluation, The authors --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal. The responses were detailed and insightful, and I am happy to keep the current score --- Reply to Comment 1.1.1: Comment: Thanks a lot for your time and encouraging comments.
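The time-augmentation trick in point 2 of the rebuttal above can be made concrete as follows. This is a sketch under the assumption that the annealed dynamics take the form below, with $\beta(t) \to \infty$, as in the reviewer's question:

```latex
% Introduce a clock variable s_t to turn the time-inhomogeneous annealed SDE
% into an autonomous (time-homogeneous) system:
\mathrm{d}X_t = -\nabla f(X_t)\,\mathrm{d}t + \sqrt{2/\beta(s_t)}\,\mathrm{d}W_t,
\qquad
\mathrm{d}s_t = \mathrm{d}t, \qquad s_0 = 0.
```

The augmented pair $(X_t, s_t)$ has a well-defined flow, but that flow admits no equilibria, since $s_t \to \infty$; this is precisely the obstruction the rebuttal points out. In the $\beta \to \infty$ limit one instead compares against the ODE $\dot{x} = -\nabla f(x)$, whose ICT sets consist of stationary points of $f$.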
Summary: The work studies when a discretized Langevin dynamics under Robbins-Monro-type stepsizes can converge to the Gibbs distribution. The paper obtains asymptotic results under very mild assumptions, and the framework includes not only the Euler discretization, but many other sampling schemes as well, such as mirror Langevin, proximal, randomized mid-point and Runge-Kutta methods. The analysis builds upon constructing a continuous-time trajectory via interpolating the iterates, Wasserstein asymptotic pseudotrajectories, and checking the stability condition by invoking dynamical system theory. Strengths: The paper is built upon solid mathematical analysis and it is a nice contribution to the vast literature on Langevin algorithms in machine learning. The analysis provides a unified framework for asymptotic guarantees under the Robbins-Monro scheme. Weaknesses: Even though the mathematical theory is nice, the results may not have much practical importance. The author(s) emphasized that this work is about asymptotic analysis instead of non-asymptotic analysis. However, because the results are of an asymptotic nature, it is not clear to me what insights the results can provide concerning schemes such as mirror Langevin, proximal, randomized mid-point and Runge-Kutta methods; since one only has asymptotic guarantees, it is impossible to use these results to compare these algorithms with the more classic and basic Euler discretization of Langevin algorithms. Also, I find the discussion of the existing literature less satisfactory. Some of the claims and statements may not be that accurate. For example, on page 1, ``Existing guarantees suffer from the drawback of lacking guarantees for the last-iterates'' and ``the convergence is typically given on the averaged iterates instead of the more natural last iterates''.
To the best of my knowledge, there are numerous works on Langevin algorithms from the past decade and most of them are about last-iterate guarantees in Wasserstein, KL or other distances; see e.g. Dalalyan and Karagulyan (2019), Dalalyan and Riou-Durand (2020), Ma et al. (2019), Ma et al. (2021), Raginsky et al. (2017), Gao et al. (2022). "and little is known beyond the elementary schemes of stochastic gradient Langevin dynamics." This is not accurate either. There have been many studies in the literature about underdamped Langevin, high-order Langevin, non-reversible Langevin and other variants of SGLD, such as Dalalyan and Riou-Durand (2020), Hu et al. (2020), Ma et al. (2021), Mou et al. (2021), Gao et al. (2022). Technical Quality: 3 good Clarity: 3 good Questions for Authors: (1) Page 4. "The usual Lyapunov-type analysis for sampling algorithms focuses on bounding the change in relative entropy across iterations…" "this makes the Lyapunov analysis applicable only to the simple Euler-Maruyama discretization of (LD)" I am not too sure whether these two statements are accurate. Lyapunov functions are often used in the analysis of Langevin algorithms, to show uniform bounds on the moments, e.g. Raginsky et al. (2017), and in coupling methods, e.g. Dalalyan and Riou-Durand (2020). They are definitely applicable beyond the Euler-Maruyama scheme, e.g. Dalalyan and Riou-Durand (2020) uses the discretization proposed in Cheng et al. (2018) to analyze kinetic Langevin dynamics. (2) One technical point I would like to see discussed more is that equation (1) in your Definition 1 is for fixed $T>0$. Actually the dependence on $T$ can be exponential in $T$, which is quite common for weak approximation error in the literature. However, in order for the Langevin algorithm to converge to the Gibbs distribution, one often needs uniform-in-time guarantees; would you need $T\rightarrow\infty$ in order to obtain Theorem 2? (3) Theorem 2 is a very nice and clean result.
But I am surprised that you only need assumption (10), which is an assumption on the discretized dynamics only. The reason I am asking is that it seems to me that Assumptions 1-3 alone do not guarantee that the continuous-time Langevin SDE has a unique stationary distribution. If Theorem 2 holds, does that mean that assumption (10) implies that the continuous-time Langevin SDE has a unique stationary distribution? The existence of $\pi$ is necessary for Theorem 2 to hold. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: I did not see any discussion of limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
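As a concrete illustration of the scheme under discussion (Euler-Maruyama discretization of the Langevin diffusion with Robbins-Monro stepsizes), here is a minimal sketch on a standard Gaussian target; this is a hypothetical example, not code from the paper:

```python
import numpy as np

def sgld_robbins_monro(grad_U, n_chains=2000, n_steps=2000, seed=0):
    """Euler-Maruyama discretization of the Langevin diffusion
    x <- x - gamma * grad U(x) + sqrt(2 * gamma) * noise,
    with Robbins-Monro stepsizes gamma_n ~ 1/n."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n_chains)          # arbitrary initialization
    for n in range(n_steps):
        gamma = 1.0 / (n + 2)                  # Robbins-Monro schedule
        noise = rng.standard_normal(n_chains)
        x = x - gamma * grad_U(x) + np.sqrt(2.0 * gamma) * noise
    return x

# Target: the Gibbs distribution of U(x) = x^2 / 2, i.e. a standard Gaussian;
# the last iterate of each chain should be approximately N(0, 1) distributed.
samples = sgld_robbins_monro(lambda x: x)
```

With $\gamma_n = 1/(n+2)$ the schedule satisfies the Robbins-Monro conditions $\sum_n \gamma_n = \infty$ and $\sum_n \gamma_n^2 < \infty$, so the discretization bias vanishes asymptotically while the chain still explores.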
Rebuttal 1: Rebuttal: We sincerely appreciate Reviewer vdDX's thoughtful criticisms. After a thorough reading, we believe that these critiques primarily stem from presentational issues, which we fully acknowledge exist and commit to improving following your suggestions. In light of this, we sincerely ask for a re-evaluation of our work based on the points addressed in our rebuttal. We are open to further discussion and eager to address any additional questions or concerns you may have. > Because the results are of an asymptotic nature, it is not clear to me what insights the results can provide for schemes such as mirror Langevin, proximal, randomized mid-point and Runge-Kutta methods. In our revised version, we intend to highlight the following two insights that our results offer to practitioners: 1. **Validating existing methods:** We first note that methods like mirror Langevin and randomized mid-point currently lack even asymptotic guarantees in fully non-convex scenarios, such as sampling from neural network-defined distributions. Our work fills this gap by offering the first solid justification for these schemes, supporting practitioners in utilizing these methods confidently. 2. **Facilitating new algorithm design:** Our work motivates novel sampling methods through a straightforward verification of **Assumption 3**. An illustrative instance involves the randomized mid-point method and Runge-Kutta integrators, wherein a substantial 50% reduction in computation per iteration can be achieved without compromising convergence by simply recycling past gradients; see **Example 2**. We do acknowledge the inherent limitations of our approach: The balance between the benefits of saving gradient oracles and potential drawbacks remains an open question, necessitating case-by-case practical evaluation. Nevertheless, our theory provides a flexible algorithmic design template that extends beyond the current literature's scope.
> There are numerous works on Langevin algorithms in the past decade and most of them are about last iterates guarantees... We apologize for the confusion and would like to clarify our intended meaning concerning the absence of guarantees for **fully non-log-concave setups**, i.e., scenarios lacking assumptions of convexity or functional inequalities such as LSIs or Poincaré. The references cited by the Reviewer all make such assumptions: Dalalyan and Karagulyan (2019), Dalalyan and Riou-Durand (2020), and Ma et al. (2021) are centered around the (strongly-)log-concave context, while assumptions 1-3 in Raginsky et al. (2017) and assumption 1 in Gao et al. (2022) imply Poincaré inequalities. We express our gratitude to the Reviewer for bringing this point of confusion to our attention; we will make the necessary modifications accordingly. > On "little is known beyond the elementary schemes" We would like to clarify that our reference to "elementary schemes" pertains specifically to discretizing the **Langevin diffusion** mentioned in equation (LD). While Reviewer vdDX has rightfully pointed out the existence of elementary discretization schemes for **other SDEs**, such as the underdamped or higher-order Langevin, our primary focus in this paper remains centered on the Langevin diffusion. [It is worth noting that our framework can be adapted to analyze these schemes.] As for the existing SGLD variants, which encompass variance reduction methods, it is important to highlight that their core emphasis continues to revolve around unbiased gradients. ### Q1 We thank the Reviewer for raising this question, which again stems from a presentational issue: Our primary focus is the impact of **bias** in these analyses. As the Reviewer rightfully pointed out, handling noisy gradients' impact on entropy is possible, but bias effects are more complex for current Lyapunov analyses.
Consequently, these analyses are unsuitable for application to schemes like the mirror Langevin approach. It is within this context that our framework offers an alternative proof methodology, bypassing the necessity to track any Lyapunov function. ### Q2 This is a very nice question. Indeed, the dependence on $T$ is exponential (see the last equation in line 547). However, the defining property of being a WAPT is that for **any** fixed $T > 0$, as the beginning of the time window $[t, t+T]$ goes to infinity, the sup distance goes to zero; see **Definition 1**. Our analysis then shows that it suffices to have a **finite** (but arbitrary) $T$ in equation (1), and no uniform control in $T$ is required. ### Q3 The uniqueness of the stationary distribution of the continuous-time Langevin diffusion can be established in various ways under our assumptions. One possible route, in line with our dynamical system analysis, is as follows. 1. Assumption 1 ensures that the continuous-time Langevin diffusion has strong solutions; this is standard. Denote the distribution at time $t$ by $\mu_t$. 2. Following the notation of M. Benaïm (1999), "Dynamics of stochastic approximation algorithms", section 6.2, let $M$ be the set of absolutely continuous probability measures in $W_2$, let $\Lambda = \{\pi\}$, and define $V(\mu) = D_{\mathrm{KL}}(\mu \| \pi)$. Then, it is clear that $V$ is a Lyapunov function of the Langevin dynamics, whose value is strictly decreasing along the flow (as the time derivative of $V$ along the flow is the negative of the relative Fisher information, which is strictly positive for all measures other than $\pi$). 3. Thus, all requirements of Proposition 6.4 are satisfied, showing that every limit set of $\{\mu_t\}_{t\geq 0}$ is contained in $\Lambda$. In other words, the only possible limit point is $\pi$. --- We hope that the above addresses your questions - but please let us know if any of the above is not sufficiently clear.
Thank you again for your input and thoughtful criticisms, The authors --- Rebuttal Comment 1.1: Comment: Thanks for the very detailed response. I think some of your explanations really helped me to understand and appreciate your work. Even though the practical relevance of your work is still not that convincing to me, I do appreciate your work provide a unified framework for sampling a very general class of targets, which is a nice addition to the literature of Langevin algorithms. I will raise my score. --- Reply to Comment 1.1.1: Title: Official Comment by Authors Comment: Thank you for the constructive criticism and the re-assessment; we promise to revise our manuscript to incorporate the discussion with the Reviewer.
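For reference, the WAPT property invoked in the Q2 answer above can be written as follows; this is our paraphrase in the Wasserstein setting, not the paper's exact Definition 1 (here $\Phi_s$ denotes the flow of the continuous-time dynamics and $\bar\mu_t$ the law of the interpolated iterates):

```latex
% For every fixed window length T, the interpolated process shadows
% the continuous-time flow over [t, t+T] as t goes to infinity:
\forall\, T > 0:\qquad
\lim_{t \to \infty}\ \sup_{0 \le s \le T}
\mathcal{W}_2\!\big(\bar\mu_{t+s},\, \Phi_s(\bar\mu_t)\big) = 0.
```

Note that the supremum is over a window of fixed (but arbitrary) length $T$, which is why no uniform-in-$T$ control is needed.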
Summary: This paper gives a unified asymptotic analysis of a broad class of stochastic algorithms that encompasses several variants of the Langevin algorithm. In particular, it can handle issues of inexact gradients, bias, noise, and problems beyond gradient-based algorithms. The key technique is the introduction of an intermediate process, termed the *Picard process*, which fits between the iterates and the continuous-time process. Strengths: The main strength of this paper is the unified nature of the convergence results. The general framework encompasses a fairly large variety of Langevin-type algorithms, and likely a decent class of problems beyond Langevin algorithms. It also gives a clean, unified analysis. Weaknesses: My only major criticism is that the paper seems to overstate the novelty of the analytic methods. In particular, I have not seen the specific Picard process defined here, but several works introduce related processes that fit between the iterates and the continuous-time process. This enables similar triangle-inequality based convergence proofs. For example (but not limited to): Chau, Ngoc Huy, et al. "On stochastic gradient langevin dynamics with dependent data streams: The fully nonconvex case." SIAM Journal on Mathematics of Data Science 3.3 (2021): 959-986. Bubeck, Sébastien, Ronen Eldan, and Joseph Lehec. "Sampling from a log-concave distribution with projected Langevin Monte Carlo." Discrete & Computational Geometry 59 (2018): 757-783. Additionally, last iterate convergence guarantees are not particularly rare. Both works cited above give last-iterate bounds from corresponding stationary distributions. Many of the works citing these papers do as well. On a minor note, there are some confusing notations. * $b_k$ is used for the bias, but then gets re-defined in the proof of Lemma 2. * Using $\sigma$ for both the diffusion matrix and the variance bound on the gradient noise is mildly confusing.
* At the end of the proof of Lemma 1, it should be the limit as $n\to\infty$, instead of $k\to \infty$ Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: * Can you give examples beyond Langevin-type algorithms for which you can apply the method? * Can you get some more quantitative guarantees beyond asymptotic convergence? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: These are adequately addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are sincerely grateful for pointing out the missing references and remarks. We reply to your questions below, and we will revise our manuscript accordingly in the upcoming revision. > My only major criticism is that the paper seems to overstate the novelty of the analytic methods. In particular, I have not seen the specific Picard process defined here, but several works introduce related processes that fit between the iterates and the continuous time process. This enables similar triangle-inequality based convergence proofs. For example (but not limited to): Chau, Ngoc Huy, et al. "On stochastic gradient langevin dynamics with dependent data streams: The fully nonconvex case." SIAM Journal on Mathematics of Data Science 3.3 (2021): 959-986. Bubeck, Sébastien, Ronen Eldan, and Joseph Lehec. "Sampling from a log-concave distribution with projected Langevin Monte Carlo." Discrete & Computational Geometry 59 (2018): 757-783. We agree with Reviewer hN2a's assessment and sincerely appreciate the references they have provided. We will duly update the paper and highlight our contributions, as well as acknowledge that similar ideas have been explored in prior works. What distinguishes our work from the existing literature is the advantage of generalizing the Picard process to encompass a vastly wider class of algorithms, specifically the Langevin-Robbins-Monro schemes. Moreover, the integration of the Picard process with the theory of asymptotic pseudo-trajectories plays a pivotal role in our analysis, and both of these aspects present original contributions. > Additionally, last iterate convergence guarantees are not particularly rare. Both works cited above give last-iterate bounds from corresponding stationary distributions. Many of the works citing these papers do as well. We agree that there are numerous results on last iterates for different settings and algorithms. 
What we intended to express is that little is known in the **generic non-log-concave setup** (without convexity or functional inequalities) and scenarios involving **biased discretization**. For example, Bubeck et al. (2018) is for log-concave target distributions, and Chau et al. (2021) is for unbiased gradient estimates (see Eqn. (7) in their paper). To alleviate any misunderstandings, we are committed to revising our exposition on related work. > On a minor note, there are some confusing notations. b_k is used for the bias, but then gets re-defined in the proof of Lemma 2. Using sigma for both the diffusion matrix and the variance bound on the gradient noise is mildly confusing. At the end of the proof of Lemma 1, it should be the limit as n -> infty , instead of k -> infty. Thanks a lot for pointing this out. The end of the proof of Lemma 1 should be $n \to \infty$ as you mentioned. We will make the notation more succinct in the final version. > Can you give examples beyond Langevin-type algorithms for which you can apply the method? Essentially, our framework can be applied across a spectrum of continuous-time dynamics, such as Underdamped/Higher-order Langevin, Hamiltonian Monte Carlo, Neural SDEs, and more. In addition, the scope extends to diverse discretization methods for these dynamics. Examples of such include Euler-Maruyama, leap-frog, and symplectic integrators. > Can you get some more quantitative guarantees beyond asymptotic convergence? At the expense of stronger assumptions, yes: We are able to obtain results that are close in spirit to non-asymptotic results. Assuming step-sizes $\gamma_n = \Theta(n^{-1})$ and exponential convergence of continuous-time dynamics (which holds under, e.g., LSI), we can show that an LRM scheme converges in $\mathcal{W}^2_2$ at a rate of $O(n^{-1})$ under a warm-start condition, **under the same noise and bias conditions**. 
While we cannot estimate the duration of the burn-in phase, our argument holds for a wider range of stochastic and biased algorithms than in the current literature. However, as our primary focus lies in the generic setting where such assumptions are unavailable, we have chosen to defer these studies to future work. --- We hope that the above addresses your questions - but please let us know if any of the above is not sufficiently clear. Thank you again for your input and positive evaluation, The authors --- Rebuttal Comment 1.1: Title: Response Comment: Thank you for your updates. As I mentioned, I am quite positive on this work. The main thing for me, is to place it into the context of existing work better, which I believe you have done. --- Reply to Comment 1.1.1: Title: Official Comment by Authors Comment: Thank you for the expert's review. We also express our gratitude again for pointing out the missing link to prior work.
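The $O(n^{-1})$ rate sketched in the rebuttal above is the standard outcome of a stochastic-approximation recursion; the following is our own illustrative derivation sketch under the stated contraction assumption, not an equation from the paper:

```latex
% Let \delta_n = \mathcal{W}_2^2(\mu_n, \pi). Exponential contraction of
% the continuous-time flow (e.g., under an LSI) together with an
% O(\gamma_n^2) one-step discretization/bias error gives, for some
% constants c, C > 0,
\delta_{n+1} \;\le\; (1 - c\,\gamma_n)\,\delta_n + C\,\gamma_n^2 .
% With \gamma_n = \Theta(n^{-1}) and c\,\gamma_n\, n > 1 for large n,
% a Chung-type lemma yields \delta_n = O(n^{-1}).
```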
Rebuttal 1: Rebuttal: Dear AC and dear reviewers, We wish to express our sincere gratitude for your dedicated efforts. Your insightful critiques and favorable evaluation have been acknowledged, and we have responded to all your inquiries in a detailed point-by-point manner, presented below. After thorough consideration of your remarks, we wish to affirm the validity of the concerns highlighted by the reviewers. We firmly believe that these concerns can be aptly addressed through minor revisions, as detailed in our individual response. We appreciate your invaluable contributions towards refining our manuscript. With utmost appreciation, The authors
NeurIPS_2023_submissions_huggingface
2023
Scale-Space Hypernetworks for Efficient Biomedical Image Analysis
Accept (poster)
Summary: The authors propose a unified approach based on Hypernetworks (HN) to model the accuracy-efficiency Pareto front for medical applications. The authors claim the following contributions: - Introducing Scale-Space HyperNetworks (SSHN), a single model that, given a rescaling factor, generates weights for the corresponding model with reduced spatial dimensions of intermediate features. - Demonstrating the performance of SSHN on varying datasets while reducing the number of FLOPs by up to 50%. Strengths: - The paper is well-written and easy to follow. - It tackles an important problem of computational heterogeneity during inference. - The proposed method is simple and easy to implement/reproduce. Weaknesses: - The authors failed to cite pioneering HN works [1,2,3]. Specifically [1], which is highly relevant and related to this work. - The novelty of this paper is limited. Pareto Front Learning (PFL) [1] is a widely explored field within machine learning, encompassing various lines of research. PFL is a computational approach that aims to find the optimal trade-offs between multiple conflicting objectives: it identifies a set of solutions that lie on the Pareto front, representing the best possible outcome for each objective without sacrificing performance on the others. By exploring the Pareto front, decision-makers can make informed choices that balance competing objectives. It seems like the paper's core novelty has already been presented by previous works. I suggest the authors better explain how their approach differs from highly related works in PFL [1,4]. - I wonder how well the proposed approach works on non-medical datasets, e.g. Cityscapes, NYU, etc. - How well does the model generalize to unseen scales, i.e. scales that are not in the range of $p(\varphi)$?
- I suggest the authors add a section regarding their design choices, for example (i) the prior scaling factor and (ii) the rescale module $R_\varphi$. Ablation experiments should be included as well, in the main manuscript or supplementary. Citations: [1] Learning the Pareto Front with Hypernetworks, Navon et al. [2] HyperStyle: StyleGAN Inversion with HyperNetworks for Real Image Editing, Alaluf et al. [3] Personalized Federated Learning using Hypernetworks, Shamsian et al. [4] Controllable Pareto Multi-Task Learning, Lin et al. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: - Line 105 - instead of $(C,\lceil\varphi H\rceil,\lceil \varphi H \rceil)$ it should be $(C,\lceil\varphi H\rceil,\lceil \varphi W \rceil)$. - Line 128-129: adding $R_\varphi$ will make it clearer for the reader; in addition, consider moving these lines to ~Line 105 where you first present the notation $R_\varphi$. - Why not normalize $\varphi$ such that $[0, 0.5] \rightarrow [0,1]$? This will increase $\varphi$'s resolution. - Did you try a different prior over $\varphi$, $p(\varphi)$, instead of uniform? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: See weaknesses and questions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the constructive feedback and comments. Addressing the raised questions: ### Weaknesses: - **Prior Work** - We agree that the work cited in the review is relevant to our method, and we will revise our manuscript to draw the connections and contextualize the research. - **Pareto Front Learning** - The goal of our work is substantially different from that of the referenced work \[1\]. In particular, PFL focuses on jointly optimizing the Pareto frontier of multiple predetermined training objectives across a single architectural setting. In contrast, our approach is about optimizing a single objective (in our case segmentation accuracy) across a range of architectural settings that introduce various computational trade-offs. To the best of our knowledge, our work is the first to explore amortizing the cost of training models with different internal rescaling factors, a key aspect in convolutional model efficiency, and that has not been explored in any of the references. The technical obstacles associated with our goal are different from those faced by PFL. For us, metrics of interest like computational cost (FLOPs), required memory, or model latency are not differentiable, and thus cannot be easily incorporated in the learning process in the way the cited work does. Our approach allows practitioners to use the learned hypernetwork to efficiently generate the accuracy-cost frontier for their given hardware constraints, and then choose the model to be deployed based on validation metrics that do not need to be defined during the training process. - **Non-medical Datasets** - Our work focuses on medical datasets since in this domain it is common to train models from scratch, which is often computationally intensive. In contrast, natural image segmentation models typically use a pretrained encoder backbone trained at a fixed set of resolutions. 
- **Scaling shift** - Our model performs best when evaluated with rescaling factors within $p(\varphi)$. Outside this range, segmentation performance degrades. However, the assumption is that if a scale is of interest, it should be in the range of $p(\varphi)$. - **Ablations** - We include ablation experiments regarding the model architecture in Section C.1 of the manuscript. We will provide further design details regarding the resizing module $R_\varphi$. However, we are unsure about what alternative choices of rescale module $R_\varphi$ would be of interest. We believe that bilinear interpolation is the most prevalent fractional resizing mechanism in the literature. ### Questions: - We are thankful for the corrections, and we will revise the manuscript to include them. - We do normalize $[0, 0.5] \rightarrow [0, 1]$ and then apply a $(x, 1-x)$ encoding. We will revise our implementation details. - We did explore using an *area uniform* prior $p(\varphi) \sim \sqrt{\mathcal{U}(0,1)}$ but we did not find any significant differences in the produced Pareto curves. --- Rebuttal Comment 1.1: Title: Reviewer response Comment: - **Non-medical Datasets** - I tend to disagree with the authors. In biomedical imaging, as in other domains, it is all about personalization. This need comes up in different domains, hence involving various datasets. When proposing a new approach, we as a community want to see that it scales across different domains, datasets, and learning setups. - **Scaling shift** - Can the authors share results on $p(\varphi)$ that is out of range, i.e. extrapolation? - Another question that came to my mind concerns the number of learned parameters and latency. Can the authors elaborate on the overhead of the HN in terms of latency (FLOPs, wall-time) and learned parameters? Please also include these stats for the baselines. --- Reply to Comment 1.1.1: Comment: - **Non-medical datasets** - We agree that something that is universal is better than something that is not.
However, we highlight that *biomedical image analysis* is its own substantial research field that includes a broad variety of domains. For example, dental x-rays are quite different from brain MRIs. Our experiments include results spanning several imaging modalities and different anatomical structures. - **Scaling Shift** - The table below presents out-of-distribution values of the rescaling factor ($\varphi > 0.5$) for our method trained with $p(\varphi) = \mathcal{U}(0,0.5)$ on all the segmentation problems. Dice score deteriorates as we increase the rescaling factor. This makes sense because these factors were not considered during training, and with lesser downsampling the model cannot effectively aggregate visual context. In practice, all scaling factors of interest should be included as part of $p(\varphi)$ during training. | $\varphi$ | CAMUS | OASIS | PanDental | WBC | |---------------:|:------------|:------------|:------------|:------------| | 0.50 | 0.90 (0.00) | 0.89 (0.00) | 0.94 (0.00) | 0.95 (0.00) | | 0.55 | 0.89 (0.00) | 0.89 (0.00) | 0.94 (0.00) | 0.94 (0.00) | | 0.60 | 0.88 (0.00) | 0.87 (0.00) | 0.93 (0.00) | 0.92 (0.00) | | 0.65 | 0.84 (0.00) | 0.83 (0.00) | 0.90 (0.00) | 0.90 (0.00) | | 0.70 | 0.77 (0.00) | 0.66 (0.03) | 0.86 (0.01) | 0.87 (0.01) | | 0.75 | 0.68 (0.01) | 0.44 (0.06) | 0.81 (0.01) | 0.84 (0.02) | | 0.80 | 0.61 (0.02) | 0.31 (0.05) | 0.78 (0.01) | 0.82 (0.03) | | 0.85 | 0.56 (0.02) | 0.25 (0.02) | 0.76 (0.01) | 0.79 (0.05) | | 0.90 | 0.51 (0.02) | 0.20 (0.01) | 0.75 (0.01) | 0.77 (0.06) | | 0.95 | 0.47 (0.02) | 0.16 (0.02) | 0.73 (0.01) | 0.74 (0.08) | | 1.00 | 0.42 (0.05) | 0.14 (0.02) | 0.70 (0.02) | 0.71 (0.09) | - **Efficiency Measurements** - Tables 2 & 3 in the supplement present inference and training costs, respectively. In Table 2 we present the inference cost per rescaling factor and the corresponding accuracy for our method and all baselines. 
In Table 3, we report the training cost of characterizing the Pareto accuracy-efficiency frontier for each method. In the table below, we report total parameter counts (including hypernetwork weights and FiLM parameters) as well as the number of primary network parameters for each method. We will revise the manuscript to include the parameter counts. In our experiments, training a SSHN model takes 10x less time than training a set of Fixed baselines, and is only 1.8x more costly than training a single U-Net model. At inference time, the hypernetwork is only used once to generate the weights of the primary network, which is then used to make the predictions for different inputs, so the computational cost is identical between SSHN and Fixed for a given rescaling factor. | Method | Total Params | Primary Params | |:------------|:---------------|:-----------------| | Fixed | 109.4K | 109.4K | | Stochastic | 109.4K | 109.4K | | FiLM | 192.5K | 109.4K | | SSHN (ours) | 10.7M | 109.4K |
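The generate-once, reuse-at-inference pattern described above can be sketched as follows (a hypothetical minimal example with made-up layer sizes; not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical primary network: one layer of 8 filters with 9 weights each.
PRIMARY_SHAPE = (8, 9)
N_PRIMARY = int(np.prod(PRIMARY_SHAPE))

# Hypernetwork: a tiny MLP mapping the encoded rescaling factor to a
# flat vector of primary-network weights.
W1 = 0.1 * rng.standard_normal((2, 16))
W2 = 0.1 * rng.standard_normal((16, N_PRIMARY))

def generate_primary_weights(phi):
    """phi in [0, 0.5]: normalize to [0, 1], encode as (x, 1 - x)
    (as described in the rebuttal), and map through the hypernetwork."""
    x = phi / 0.5
    enc = np.array([x, 1.0 - x])
    h = np.tanh(enc @ W1)
    return (h @ W2).reshape(PRIMARY_SHAPE)

# At inference the hypernetwork runs once per chosen rescaling factor;
# the generated weights are then reused for every input image.
theta = generate_primary_weights(0.25)
```

The hypernetwork parameters ($W1$, $W2$ here) account for the extra parameter count reported in the table above, but they add no cost at inference once the primary weights have been generated.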
Summary: This paper proposes to learn a spectrum of CNNs with varying internal rescaling factors and demonstrates the effectiveness of the proposed approach in several medical image analysis applications, including segmentation and registration, with fixed and dynamic rescaling factors. Overall, the approach is simple but sound and powerful, and the empirical results clearly support the claim. I recommend the paper for acceptance. Strengths: - Very clear presentation. - Strong empirical results, including one that nicely uncovers that many rescaling factors lead to similar results despite having substantially different inference costs for a variety of medical imaging tasks. - Simple but powerful method that can characterize the trade-off between model accuracy and inference efficiency faster and better than existing approaches. Weaknesses: n/a Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: - How would your results change if you used an architecture more recent than U-Net for the tasks you evaluated on? - Your proposed method seems to have smoother behavior compared to the other benchmarked resizing methods (Stochastic, FiLM) in Figure 3 -- do you have an intuition on why that might be? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Limitations are adequately addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the constructive feedback and comments. Addressing the raised questions: - **Primary Network Architecture** - We perform an architecture ablation experiment in Section C.1 of the supplement. For the OASIS dataset, we find similar trends to the ones of the U-Net architecture. - **Smooth behaviour** - We believe this is because the hypernetwork formulation lets our model adapt better to changes in resolution. We explored a related aspect in our *Weight transferability* experiment in Section 5.2, where we find that SSHN weights transfer well to neighboring rescaling factors. --- Rebuttal Comment 1.1: Comment: I have read the rebuttal by Authors and my assessment remains unchanged. Thanks for answering my questions.
Summary: CNNs, particularly those handling 3D data, can pose computational challenges due to their high cost. To tackle this, researchers frequently scale down the input data, a practice that often compromises accuracy. This paper presents SSHN, a technique designed to learn a variety of CNN models, each with a unique scaling factor. With a marginal increase in training duration, SSHN is demonstrated to enhance the trade-off between accuracy and efficiency, making it a promising solution to the limitations of conventional methods. Strengths: * The paper is well-written, with clear visuals and figure captions. The clarity of the text aids the understanding of the proposed concepts. * The authors have chosen a reasonable variety of datasets for evaluation, strengthening the validity of their claims. * Baseline approaches considered in the paper are relevant, providing a solid ground for comparison. * The authors have diligently reported training five randomly initialized models, ensuring their findings are not reliant on a specific random seed. Weaknesses: * The paper states that CNNs are computationally intensive at inference time. However, in this reviewer's experience, the training time is often the more challenging aspect, especially with volumetric data. This warrants further clarification. * The memory footprint of SSHN in the volumetric data regime is usually a big concern, as it often leads to memory consumption issues. It is understood that the authors did not investigate this aspect. * It is unclear how the Fixed baseline was utilized during inference. For instance, it is not specified how the best rescaling factor is chosen for a specific dataset, or whether the models are used as an ensemble. * The handling of 3D datasets (such as OASIS and CAMUS) in experiments is unclear. It is uncertain whether they are treated as 2D datasets, and whether the authors have considered using a 3D U-Net as their primary model.
If datasets were used as 2D images, this must be mentioned in the limitations section. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1. At the moment, training time has only been discussed in the supplementary material. Can you comment on the computational burden of SSHN and its implications? Would this warrant updating the limitations section? 2. Could you provide information on the memory footprint of training the SSHN in the volumetric data regime? Would it be feasible? 3. Could you elaborate on how the Fixed baseline is used during inference time and how the rescaling factor is chosen? 4. Have 3D datasets in your experiments been treated as 2D datasets? Have you considered using a 3D U-Net as the primary model? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: The authors have discussed certain limitations including the choice of model architecture and tasks. However, the authors could further elaborate on the method's training time and memory footprint for volumetric data, as these could be significant limitations for practical applications. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the constructive feedback and comments. Addressing the raised questions: 1. **Training Cost** - We would like to clarify how the model development process is performed with our method: - Training - The hypernetwork is trained by generating the primary network weights from randomly sampled rescaling factors. - Scale selection - Once trained, the hypernetwork is used once per rescaling factor to predict the weights of each primary network. These weights are used to evaluate the accuracy on a held-out set of data, producing a Pareto accuracy-efficiency frontier. A rescaling factor is then chosen based on the trade-off characteristics, which determines which primary network parameters will be used. - Inference - A single set of primary network weights is used for inference at the chosen rescaling factor. The hypernetwork is no longer needed at this point, and does not contribute to the inference computational cost. In our experiments, training an SSHN model takes 10x less time than training the set of Fixed baselines, and is only 1.8x more costly than training a single U-Net model. Once trained, the hypernetwork is only used once to generate the weights of the primary network, which are then used to make the predictions for different inputs, so the computational cost is identical between SSHN and Fixed for a given rescaling factor. 2. **Memory Footprint** - We agree that we should say more about memory consumption. Our memory consumption measurements follow similar trends to the FLOP measurements. Models with smaller rescaling factors substantially reduce memory consumption while maintaining predictive quality for a range of rescaling factors. At training time we sample $\varphi \sim \mathcal{U}(0,0.5)$, so our memory consumption is marginally more than that of a regular U-Net. At inference time, since we do not use the hypernetwork, our memory consumption is no more than that of a regular U-Net. 3. 
**Fixed baseline** - In our experiments, we do not choose a specific rescaling factor for each dataset, as that depends on the downstream accuracy-efficiency considerations. Therefore, in our experiments, we compare methods using the entire accuracy-efficiency frontier. For the Fixed baseline, we evaluate models independently, with each one of them corresponding to a separate rescaling factor and reporting an individual data point (Figures 3, 6 and 7). 4. **3D Data** - Yes, in our experiments, we used 2D mid-slices for the 3D datasets. We do this because 3D models are more computationally demanding, and the Fixed baseline requires training many individual models with different rescaling factors. Our method can be applied to 3D U-Net models, where the computational and memory improvements might turn out to be even more significant. We believe this is an interesting area of future research. --- Rebuttal Comment 1.1: Title: Response by Reviewer wkRF Comment: Dear Authors, thank you for the detailed rebuttal. Here are some of my own comments in line. > Training Cost Noted. My concern was specifically around training time. It seems that training for 1.8x longer is worth the benefit of being able to produce models on the entire frontier. > 3D Data I believe venturing into 3D might be an ultimate challenge yet desirable for the most important medical applications. I was wondering whether you could use your approach for any other hyperparameter, e.g. learning rate? Otherwise, I would like to compliment the authors once more on a very good paper. --- Reply to Comment 1.1.1: Comment: Thank you for the comments. We believe that there are other efficiency hyperparameters that we could exploit with our amortized learning strategy. For instance, changing the number of downsampling steps or the number of feature channels per layer would change the efficiency characteristics while still preserving a high degree of similarity between the networks. 
In this work, we focused on the rescaling factor as it presented direct computational benefits, but we are excited about future research expanding these ideas to other hyperparameters.
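The scale-selection step described in this rebuttal, evaluating one primary network per rescaling factor and keeping only the non-dominated (cost, accuracy) points, can be sketched as follows. This is an illustrative reconstruction rather than the authors' code; the `pareto_frontier` helper and the measured numbers are hypothetical.

```python
def pareto_frontier(points):
    # points: list of (cost, accuracy) pairs, one per rescaling factor.
    # Returns the non-dominated subset sorted by ascending cost: a point
    # is kept only if no cheaper-or-equal point achieves higher accuracy.
    frontier = []
    best_acc = float("-inf")
    for cost, acc in sorted(points, key=lambda p: (p[0], -p[1])):
        if acc > best_acc:
            frontier.append((cost, acc))
            best_acc = acc
    return frontier

# Hypothetical (cost in GFLOPs, Dice score) pairs for several factors.
measured = [(1.0, 0.86), (2.5, 0.88), (4.0, 0.87), (6.0, 0.90), (9.0, 0.90)]
print(pareto_frontier(measured))  # -> [(1.0, 0.86), (2.5, 0.88), (6.0, 0.9)]
```

A rescaling factor would then be picked from this frontier according to the downstream accuracy-efficiency requirements.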
Summary: The paper introduces a method that learns a spectrum of CNNs with different rescaling factors. The method relies on using a hypernetwork to generate the parameters of the model for a given rescaling factor --- this enables the users to choose the desired accuracy-efficiency trade-off with a single architecture. While the method is general, the authors demonstrate in two challenging structured prediction tasks (namely, segmentation and registration) the benefits of their approach in terms of accuracy-efficiency trade-off while incurring a small amount of extra training cost. Strengths: - The proposed method is a creative adaptation of hypernetworks to the practical challenge of enabling the users to choose the desired accuracy-efficiency trade-off at inference time - Moreover, a comprehensive set of experiments demonstrates that SSHN attains a considerably better accuracy-efficiency trade-off than relevant baselines in multiple datasets for two challenging tasks - The manuscript is written very clearly and its motivation is strong. Weaknesses: - I would like to hear a more detailed explanation as to why learning a single model with varying rescaling factors leads to a consistent improvement in accuracy over the models trained with a fixed rescaling factor. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - What are the additional computational costs of generating weights through a hypernetwork at inference time? Is this included in the calculation of inference costs in Figure 7? - It is unclear why FiLM is a sensible baseline to compare against. - Is the cost of training a single SSHN smaller than training multiple CNNs independently for fixed rescaling factors? If so, this could be highlighted in the manuscript. - While the simplicity of the proposed approach is a strength rather than a weakness, one could envision setting different rescaling factors in different parts of the architecture. 
Such an extension seems almost trivial to implement, but training the hypernetwork may become more challenging. It would be great to hear your thoughts on the potential benefits and risks of increasing such degrees of freedom. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the constructive feedback and comments. Addressing the raised questions: ### Weaknesses: - **Accuracy Improvements** - This is an important question, and we don't have a definitive answer. However, the results of the experiment described in Section 5.2 suggest that varying the resolution of intermediate features in the network induces a regularization effect, similar to how image scaling techniques in data augmentation pipelines regularize learning. ### Questions: - **Inference Costs** - We would like to clarify how the model development process is performed in our method: 1. Training - The hypernetwork is trained by generating the primary network weights from randomly sampled rescaling factors. 2. Scale selection - Once trained, the hypernetwork is used once per rescaling factor to predict the weights of each primary network. These weights are used to evaluate the accuracy on a held-out set of data, producing a Pareto accuracy-efficiency frontier. A rescaling factor is then chosen based on the trade-off characteristics, which determines which primary network parameters will be used. 3. Inference - A single set of primary network weights is used for inference at the chosen rescaling factor. The hypernetwork is no longer needed at this point, and does not contribute to the inference computational cost. Therefore, since the hypernetwork is not used for performing inference, Figure 7 only includes the computational costs of the primary network. We tried to explain this in our method section (L117-119), and will revise our manuscript to better clarify this in the result section as well. - **FiLM baseline** - We compare against FiLM because recent work (You only train once: Loss-conditional training of deep networks \[13\]) employed FiLM modules as a way to perform amortized learning across multiple loss weightings, efficiently characterizing the loss Pareto frontiers. 
To the best of our knowledge, this is the closest baseline for the problem statement of learning a family of models in an amortized way. - **Training Cost** - Yes, it takes approximately 10x less time to train a single SSHN than to train the set of Fixed baselines independently. We report training times in Table 3 of the supplement. We will revise the text to better highlight this fact. - **Different Rescaling Factors** - We agree with the observation. We carried out this experiment and report results for the OASIS dataset in Section C.2 of the supplement. We found that while it is feasible to train a hypernetwork model with separate rescaling factors, it fails to meaningfully improve the accuracy-efficiency Pareto frontier while requiring longer training times to converge.
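For readers unfamiliar with the FiLM baseline mentioned above: FiLM keeps a single fixed network and conditions it on the rescaling factor via a per-channel affine modulation of feature maps. A minimal numpy sketch is shown below; this is not the paper's implementation, and the tiny linear predictors standing in for the FiLM generator (the `w_*`/`b_*` arguments) are hypothetical.

```python
import numpy as np

def film(features, phi, w_gamma, b_gamma, w_beta, b_beta):
    # Feature-wise Linear Modulation: scale/shift each channel with
    # coefficients predicted from the conditioning scalar phi.
    # features: (channels, H, W) feature map.
    gamma = w_gamma * phi + b_gamma              # per-channel scale
    beta = w_beta * phi + b_beta                 # per-channel shift
    return gamma[:, None, None] * features + beta[:, None, None]

rng = np.random.default_rng(0)
h = rng.normal(size=(8, 16, 16))                 # 8-channel feature map
out = film(h, phi=0.25,
           w_gamma=np.ones(8), b_gamma=np.zeros(8),
           w_beta=np.zeros(8), b_beta=np.zeros(8))
# With these toy predictors, gamma = 0.25 and beta = 0, so FiLM simply
# scales each channel of the feature map by 0.25.
```

In contrast to SSHN, FiLM modulates activations of one shared set of weights rather than generating a separate set of primary-network weights per rescaling factor.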
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper presents Scale-Space HyperNetworks (SSHN), a method that predicts the weights for a segmentation network for a range of rescaling factors. The proposed approach makes it possible to characterize the trade-off between model accuracy and inference efficiency faster, reducing the overall computational cost. Further, the paper demonstrates that SSHN achieves improved generalization on a variety of medical imaging tasks and datasets. Strengths: - Employing a function h with learnable parameters to map the rescaling ratio to a set of convolutional weights is an interesting and novel approach, as is the overall framework. - The paper is evaluated across two critical medical image analysis tasks and is demonstrated to perform better compared to fixed and other variable resizing methods. - The paper also presents efficiency and other analyses, including varying prior width and weight transferability, which allow a deeper understanding of the framework, and I'm confident that future works in this direction can benefit from such analysis. Weaknesses: The weakness that I found in this work is a limited explanation of the method, which might create confusion and readability issues for the readers. Here are a few suggestions/questions to address them: 1. The implementation section should be clearly explained and might need to be expanded to include clear differences between how the network is trained and how it is used in inference or in evaluation. This is important because, in the later part of the paper, there are descriptions (line 198) like "once trained, we use the hyper network to rapidly evaluate a range of phi..". 2. The paper mentions how a hypernetwork has more parameters. It would be nice to indicate how many more parameters there are. 3. In Fig 2, the hypernetwork branch goes to both the encoder and decoder. But since there is a concat layer, how can a variable resizing factor be used? 4. 
The training time result should be included in the main text as this further strengthens the argument of the paper. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Please look at weakness. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the constructive feedback and comments. Addressing the raised questions: 1. **Model Development** - We will revise the manuscript to better explain the model development process and to differentiate between training and inference. To further clarify, the steps we take are: - Training - The hypernetwork is trained by generating the primary network weights from randomly sampled rescaling factors. - Scale selection - Once trained, the hypernetwork is used once per rescaling factor to predict the weights of each primary network. These weights are used to evaluate the accuracy on a held-out set of data, producing a Pareto accuracy-efficiency frontier. A rescaling factor is then chosen based on the trade-off characteristics, which determines which primary network parameters will be used. - Inference - A single set of primary network weights is used for inference at the chosen rescaling factor. The hypernetwork is no longer needed at this point, and does not contribute to the inference computational cost. We will revise the text to make this distinction more explicit and clear. 2. **Parameter Counts** - We will incorporate additional detail about the parameter counts. In our experiments, the hypernetwork model has approximately 100x more learnable parameters than the Fixed baseline. Importantly, however, the hypernetwork predicts the weights of a primary network that has identical structure to the Fixed baselines. 3. **Decoder Resizing** - For resizing layers in the decoding branch, the feature maps are resized to match the spatial dimensions of the tensors from the skip connections that are concatenated afterwards. We will revise the text to better describe this detail of the implementation. 4. **Training Runtime** - We will revise the results section to incorporate training runtimes. --- Rebuttal Comment 1.1: Title: Official comment Comment: I have read the rebuttal from authors and other reviewers and my score remains unchanged. 
Thanks!
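The train / scale-select / infer pipeline repeated across these rebuttals can be condensed into a toy sketch: a hypernetwork maps the rescaling factor to the weights of a primary network, is queried once per factor, and is discarded at inference. This is a deliberately minimal stand-in, not the authors' architecture; the shapes, the one-hidden-layer MLP, and the linear "primary network" are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: the "primary network" is a linear map with 4 weights,
# and the hypernetwork h(phi) is a one-hidden-layer MLP predicting them.
W1, b1 = rng.normal(size=(16, 1)), np.zeros(16)
W2, b2 = rng.normal(size=(4, 16)) * 0.1, np.zeros(4)

def hypernetwork(phi):
    hidden = np.tanh(W1 @ np.array([phi]) + b1)
    return W2 @ hidden + b2        # predicted primary-network weights

# Training would update (W1, b1, W2, b2) with phi sampled from U(0, 0.5);
# here we only illustrate the post-training flow.
phi = rng.uniform(0.0, 0.5)

# Scale selection / inference: the hypernetwork is called ONCE per phi...
theta = hypernetwork(phi)

# ...and the predicted weights are reused for every input, so the
# hypernetwork adds no per-example inference cost.
x = rng.normal(size=4)
prediction = theta @ x
```

The key point the rebuttals make is visible in the structure: `hypernetwork` appears once per rescaling factor, while only `theta @ x` runs per input.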
The emergence of clusters in self-attention dynamics
Accept (poster)
Summary: For fixed $Q,K,V$ and different structures of $V$, the authors analyze the distribution of $x(t)$ as $t\to\infty$, where tokens are seen as particles and the self-attention mechanism is seen as particle interaction, i.e. as a McKean-Vlasov SDE. The conclusion is that as $t\to\infty$, i.e. going through the layers, $x(t)$ converges to a clustered configuration, and the attention matrix $P$ becomes low-rank, i.e. tokens depend on few tokens. Strengths: Following [LLH+20] and [SABP22], the authors interpret self-attention and transformers through the lens of interacting particle systems/McKean-Vlasov SDEs. This is perhaps the first paper to show via the above interpretation the emergence of clusters/leaders in transformers. Weaknesses: 1. Please state within the main paper the relationship between the clustered configuration and the initial configuration. 2. The result in [SABP22] seems to be the opposite of clustering. Please let me know if I misunderstood. Otherwise please comment on the clustering/non-clustering effects and use-cases for downstream applications. 3. Please remark on why "transformer dynamics present unique mathematical challenges that cannot be addressed using the tools developed for these more primitive models. (lines 147-148)." 4. Please strengthen the contribution with layer normalization. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. Please comment on the effect of different embeddings. (since tokens converge to the boundary and vertices of a convex polytope, and the polytope depends on the initial distribution of tokens and the embedding). 2. Please give some intuition on what the vertices mean. (are vertices more descriptive since everything inside is a convex combination of the vertices? how is the final distribution on the vertices related to the initial distribution? e.g. is the mean unchanged? 
will it speed up training/ execution if a neural network was designed to give the convex polytope and the weights on the vertices instead of repeated iteration of self-attention?) 3. Please comment on the downstream effects of "linearly separable representation of tokens (line 266-267)". Perhaps $x(t)$ as $t\to\infty$ can be used as new embeddings. 4. Read the proof for -V in the appendix. Would it also help to think of clustering dynamics in terms of potential and thus flipping the sign of $V$ flips the potential landscape? 5. Please comment on the shrinking convex hull/ polytope (since it is finite) (Appendix C.1.1). What about infinite number of tokens? 6. If the tokens cluster, why would depending on one token (low rank attention matrix $P$) within a cluster be more likely than depending on another token within the cluster? Or is it because $P$ is low rank and thus tokens cluster? Or a combination of both effects? 7. Please comment on the effects of skip and non-skip connections [DCL21]. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Authors addressed the limitation of fixed $Q,K,V$, simple structures for $V$. Authors mentioned multi-head attention as future work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback, and respond to each point individually below. **Weaknesses.** 1. We do not believe there necessarily is a relation, as there is no easy way to predict the clustered configuration from the initial one beyond simply running the dynamics. Moreover, the number of clusters is often not equal to the number of vertices of the polytope given by the convex hull of the initial sequence. 2. In our opinion the results in [SABP22] are of a slightly different nature. The authors show, by virtue of an additional bandwidth hyperparameter $\varepsilon$ in the exponential, that the continuity equation for the transformer dynamics (replacing self-attention by the Sinkformer kernel) can be seen as an approximation of the heat equation ($V=I_d$) when the bandwidth goes to $0$. In our case, the bandwidth is fixed and equal to $1$. We believe that in practical applications, the self-attention mechanism serves a dimension-reduction purpose. Our clustering theorems can be seen as an indicator of this thesis. 3. The related models which we mention (Vicsek, Cucker-Smale, and so on) have significantly more basic interactions, often involving symmetry and a radial dependence, thus rendering the pre-developed mathematical tools inapplicable to our setting. 4. Layer normalization amounts to considering the dynamics on the unit sphere of $\mathbb{R}^d$. In ongoing work, we have observed, both theoretically and numerically, that clustering persists in a sense much like the results presented in the present paper. The proof techniques are however different due to the intrinsic symmetries entailed by the sphere, and thus involve significant additional developments. --- **Questions.** 1. It is not clear how the limiting convex polytope depends on the initial distribution of tokens even without the consideration of different embeddings. We believe that this could be an exciting avenue for future research. 2. 
We do not believe that the vertices of the limiting polytope have a clear meaning, as they are not straightforwardly related to the initial distribution, as alluded to in previous answers. Slight changes in the initial distribution can change the limiting polytope. The mean is not preserved. We thank the Reviewer for the last observation, which is very compelling: indeed, should one manage to gather a clearer understanding of how the limiting polytope depends on the initial distribution and on $Q^\top K$, the repeated iteration of self-attention could perhaps be replaced in training, and this could be seen as a possible convex relaxation. 3. This is a possibility, and we thank the Reviewer for this observation. What we had in mind was to potentially use transformers as a plain clustering algorithm in the spirit of the mean-shift algorithm, or perhaps for binary classification tasks, by considering the inputs of the entire dataset as an input sequence. 4. Yes, there is a natural potential, or more precisely a natural Lyapunov function $\mathscr{L}$, which appears in the proof of Lemma C.8. The function $\mathscr{L}$ is non-increasing when $V=-I_d$, which corroborates the global decay of the norms of the points toward $0$ (Theorem C.5). When $V=I_d$, the function $\mathscr{L}$ is non-decreasing along the flow of the original dynamics (15), and indeed all points tend to diverge toward infinity when evolved through this dynamics, as a corollary of Theorem 3.1 after removing the rescaling. 5. With an infinite number of tokens, there is also shrinking of the convex hull as in Proposition C.2. The limiting shape is convex, but it is not necessarily a convex polytope; it may be, for instance, a sphere. 6. There is empirically one leader in each cluster, but it is difficult to know which one it is without running the full dynamics. A limiting low-rank self-attention matrix allows one to identify not only clusters, but also leaders within the clusters, by reading the rows and columns. 
When we establish clustering, except for the case $d=1$ we are not able to prove convergence toward a low-rank self-attention matrix, although this is what we observe in practice. 7. The outcome of [DCL21] is that without skip connections, the output of the transformer dynamics converges doubly exponentially to a rank-1 matrix. Therefore, [DCL21] highlights the necessity of skip connections for transformers to be efficient. [DCL21] proves clustering toward a unique cluster for transformers without skip connections; our work proves that clustering occurs for transformers with skip connections, which are the ones used in practice. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for answering my many questions. I am positive towards this paper and am raising my score to 7. I have read all the other reviews and rebuttals. May I ask what is the ``probabilistic'' structure of the self-attention mechanism in the rebuttal to reviewer Qc4C? Thanks! --- Reply to Comment 1.1.1: Comment: We thank the Reviewer again for their feedback. With regard to the question - what was meant by "probabilistic" is the fact that the dynamics are given by a convex combination of the tokens, namely, the self-attention coefficients are in $[0,1]$ and add up to $1$. We hope this clarifies our comment.
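The clustering behaviour discussed in this exchange can be reproduced with a minimal numerical sketch: a discrete-time iteration of the rescaled self-attention dynamics with $V=I_d$, in which each update maps every token to a convex combination of the current tokens. This is not the authors' code; the choice $Q^\top K = I_d$, the step size, and the clustering tolerance are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, dt, steps = 12, 2, 0.1, 2000
Z0 = rng.normal(size=(n, d))        # initial tokens x_i(0) in R^d
A = np.eye(d)                       # stands in for Q^T K (assumed = I_d)

def step(Z):
    # Row-stochastic self-attention matrix P from pairwise scores.
    logits = Z @ A @ Z.T
    logits -= logits.max(axis=1, keepdims=True)   # numerical stabilization
    P = np.exp(logits)
    P /= P.sum(axis=1, keepdims=True)
    # Rescaled dynamics with V = I_d: each token moves to a convex
    # combination of the current tokens, so the convex hull shrinks.
    return (Z + dt * P @ Z) / (1 + dt)

Z = Z0.copy()
for _ in range(steps):
    Z = step(Z)

# Tokens typically collapse onto a handful of cluster centers; group
# points that are pairwise closer than a tolerance and count the groups.
dists = np.linalg.norm(Z[:, None] - Z[None, :], axis=-1)
labels = (dists < 1e-3).argmax(axis=1)
print(len(set(labels)), "clusters from", n, "tokens")
```

Since every step is a convex combination, the maximum token norm never increases, matching the convex-hull-shrinkage property (Proposition C.2) invoked in the rebuttal.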
Summary: This paper analyses the self-attention mechanism in (trained) Transformers under the lens of dynamical systems. The authors focus on a bare-bone self-attention architecture without the bells and whistles of standard Transformers (e.g. multi-head attention, layer norm) and assume time-independent weights, i.e. shared weights across "layers". They start their analysis by studying simple self-attention architectures, including 1d settings, and progressively move to more complex and realistic settings, including value matrices with a simple and positive leading eigenvalue. Across all the scenarios studied, the authors observe that in the limit, tokens cluster towards a few objects, such as vertices of a polytope or hyperplanes. Thus, the authors mathematically confirm that a small number of leaders drive the transformer dynamics. Strengths: This work focuses on the mathematical underpinnings of self-attention and transformer architectures, a research direction that remains underexplored, yet is much needed, given the importance of transformer architectures in modern deep learning. Such a theoretical foundation is original and significant for the community. The authors showcase the clustering behaviour of self-attention in progressively realistic scenarios. The main points and conclusions of this work are clear. Weaknesses: Overall, it is unclear whether the assumptions used to prove the theorems (e.g. good triples, $Q^T K \succ 0$) are realistic for standard Transformers. Furthermore, the analysis is performed assuming only time-independent weights. While that design choice has been used in practice (e.g. ALBERT), it is not the standard design choice for transformers. It is unclear whether the theorems introduced in this work hold for time-dependent weights. The paper is very math-heavy for a general machine learning audience, and as such it is often hard to follow. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. In eq. 
4, the authors introduce an exponential factor, which is "instrumental in the proofs of all results that follow". Would the theorems hold without the use of this factor? What are the practical implications of this additional factor? 2. Furthermore, according to the authors, this operation is a "mathematically justified surrogate for the layer normalization". Do the authors expect this rescaling operation to be useful in an actual self-attention implementation, either replacing layer norm or used along with it? 3. In Definition 2, the authors define what constitutes a good triple $(Q, K, V)$. How realistic is this assumption, and how robust is the theorem to the assumption holding approximately? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: The authors should focus more on the limitations of the assumptions used for the theoretical results introduced in this work. It is unclear whether the theorems hold in practice, or whether they are based on very strong and unrealistic assumptions. See also weaknesses and limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback, and respond to each point individually below. **Weaknesses.** We have attempted to paint a more complete picture on the necessity of some assumptions made to facilitate the development of this theory through numerical experiments (Figures 1, 2, 3 of the PDF) and additional comments (e.g. the answer to Question 1 by Reviewer Qc4c). --- **Questions.** 1. The main effect, namely the emergence of leaders (visible in the self-attention matrix) remains true without the rescaling factor, see Theorem 2.1. Without the rescaling factor however, there is no clustering: this rescaling factor, in practice, has an effect of normalization on tokens, without which tokens typically diverge to $\pm\infty$. 2. We indeed believe that the exponential rescaling can be used in actual implementations, as a surrogate to layer norm. (There is a slight difference however in the sense that layer norm amounts to having particles evolving on the unit sphere of $\mathbb{R}^d$, hence bounded in all directions.) It could potentially be used in conjunction with layer norm, but it is not totally clear to us what this would entail. 3. We recognize that point (i), namely $Q^\top K\succ 0$, appears stringent. This assumption is made since the proof follows the lines of that of Theorem 3.1 (which is conceived on elements using this condition). However, experiments indicate that (i) is not necessary for the conclusion of the theorem to hold. Point (ii) is a generalization of the first bullet point in Definition 1 (the latter is satisfied by some pre-trained value matrices of individual heads in ALBERT, see Figure 10 in Supplementary Material). Assumptions on $V$ are generally more sensitive to perturbations than assumptions on $(Q, K)$. Should the eigenvalue $\lambda$ with largest real part be negative, all rescaled particles will diverge to infinity. Should $\lambda$ be complex, we do not expect any clustering phenomenon. 
Yet none of the conclusions seem to change if $Q^\top K$ is taken arbitrary. --- Rebuttal Comment 1.1: Title: Official Comment by Reviewer 8epq Comment: I would like to thank the authors for their rebuttal; they have addressed my questions. I am still positive towards this paper and I think its contribution is significant. I am keeping my score to 6. I hope the authors incorporate some of the discussion in the camera-ready version of the paper.
Summary: In this work, the authors develop a theoretical analysis of the self-attention mechanism. In particular, the authors study the setting of a trained transformer, and their goal is to characterize the output of a deep transformer with multiple layers of self-attention. For simplicity the authors focus on weight sharing and also do not use MLPs and multiple heads. The key results include: i) convergence guarantees of the self-attention matrix to a low-rank matrix; ii) a characterization of the convergence of initial inputs to final outputs as a function of the query, key and value matrices. In their characterization, the authors prove a clustering structure. For instance, when the value matrix is the identity, the authors show that, depending on the initial condition, the output of the self-attention dynamics converges to one of the vertices of a polytope. In another setting, the authors show that the output converges to a point that lies on one of three hyperplanes. All of these results point to a clustering phenomenon at the output of the transformer. This phenomenon has strong parallels to neural collapse studied in the standard supervised learning setting. Strengths: This is a very well written paper. It studies the very important problem of understanding the core of transformers, i.e., self-attention. Originality: The paper has quite an original idea and approach to solve the problem. Quality: This is a high quality paper that can inspire important developments on the path of demystifying transformers. Clarity: The paper is a joy to read. Significance: I believe that the paper takes an important step in the right direction and can inspire the theory community in machine learning to further our understanding of transformers. Weaknesses: The paper does not have significant weaknesses. I do have some questions and concerns that I would want to ask. 1. I see that the convergence analysis and the result that self-attention converges to a low-rank matrix are given for the case of d=1. 
What about the more general setting? This seems to be true empirically, but what can we say in theory? 2. I see that the authors state that the discrete-time system's analysis is straightforward. Intuitively, it feels not so straightforward and should rely on assumptions on the step size. Can the authors elaborate? 3. The authors shed some light on ALBERT's pretrained matrices satisfying the condition in the theorem in some cases. A more extensive analysis of this could be quite useful to understand whether these assumptions are indeed necessary to achieve such a clustering structure in practice, or whether they are a matter of theoretical convenience. 4. I would appreciate it if the authors could add some more intuition on the geometric meaning of Definition 1. 5. While I understand that the authors have focused purely on self-attention, which makes a lot of sense from a theory point of view, from an empirical point of view I believe there is value in doing numerical experiments to see how sensitive the clustering illustrations are to the addition of an MLP layer on top. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Please see the weakness section. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: The authors point to several open problems and directions. They can be interpreted as gaps that can still be filled. A more extensive discussion on limitations could be useful. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
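The low-rank convergence raised in point 1 can be illustrated numerically for $d=1$. The following is a toy sketch (not the paper's code; the values of `qk`, `v`, and `dt` are illustrative assumptions), simulating the scalar-token dynamics with explicit Euler steps:

```python
import numpy as np

# Toy numerical sketch (not the paper's code) of the d = 1 dynamics
#   dx_i/dt = v * sum_j P_ij(x) x_j,  with P = row-softmax(qk * x_i * x_j),
# using illustrative values for qk, v and dt.
def attention_matrix(x, qk=1.0):
    A = qk * np.outer(x, x)
    A = A - A.max(axis=1, keepdims=True)   # stabilize the softmax
    E = np.exp(A)
    return E / E.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
x = rng.standard_normal(6)                 # 6 scalar tokens
v, dt = 1.0, 0.01
for _ in range(2000):                      # explicit Euler up to t = 20
    x = x + dt * v * (attention_matrix(x) @ x)

P = attention_matrix(x)
# At large t the self-attention matrix becomes numerically low rank,
# consistent with the convergence claim for d = 1.
print(np.linalg.matrix_rank(P))
```

The rank printed at the end is small (at most 3 in this sketch), in line with the reviewed claim that the self-attention matrix converges to a low-rank matrix.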
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback, and respond to each point individually below. **Weaknesses.** 1. A result similar to Theorem 2.1 certainly holds for $d\geq 2$ and $V=I_d$, but its statement and proof would need the exclusion of numerous pathological and non-generic initial configurations of tokens. The proof is anticipated to be highly technically challenging (and admittedly tedious), and we have chosen to postpone it to future work. 2. We agree with the referee, and we will give more details on the discrete-time setting in the camera-ready version. Note that we impose that $I_d+V\Delta t$ is invertible; this is implicitly an assumption on the step size, which holds for instance for sufficiently small $\Delta t$. Let us give here some details on the proof of Theorem 3.1 in the discrete-time setting, following the intermediate results of Appendix C.1. First, Proposition C.2 (convex hull shrinkage) holds intuitively because for any $i$ and $k$, $z_i^{[k+1]}=\frac{1}{1+\Delta t}(z_i^{[k]}+\Delta t\sum_{j} P_{ij}^{[k]}z_j^{[k]})$ belongs to the convex hull of the $z_j^{[k]}$. Then we define the candidate set of limit points as in (35), and Claim 1 holds without any change in the statement or in the proof. Then, as in Steps 2 and 3 in Section C.1.2, we first prove that if $z_i^{[k]}$ is not already close to one point of the candidate set, it will keep moving toward the boundary, and finally we prove that tokens cannot circulate indefinitely between different points on the boundary. This proves convergence of each token toward a point of the set given by (35). 3. We believe that some assumptions are purely made out of theoretical convenience. As indicated to Referee Qc4c in Weakness 1, we have carried out additional experiments that supplement our claims beyond the assumptions made for the theory. For instance, the assumption $Q^\top K\succ 0$ in Theorems 3.1 and 5.1 does not appear necessary in experiments. 4.
The first condition in Definition 1 means that $V$ has one dominant eigenvalue: the evolution, which shares some similarities with the ODE $\dot{x}=Vx$, is quicker in the eigenspace $\mathrm{span}(\varphi_1)$ (supposing $V$ diagonalizable for simplicity) of the dominant eigenvalue. Therefore, after some time (or equivalently, after a few layers), the tokens have the corresponding coordinate much larger than the other coordinates, and this is one of the key ingredients in our analysis. The second condition means that $Q^\top K$ is positive definite, but solely on the subspace $\mathrm{span}(\varphi_1)$. This condition is useful to guarantee that the components of $x_i$, $x_j$ along $\mathrm{span}(\varphi_1)$ represent the principal contribution to the weight $\exp(\langle Qx_i,Kx_j\rangle)$. 5. This can be done in two ways. The first way consists in first running the pure self-attention dynamics up to time $T$ (or equivalently, for $O(T)$ layers), and then applying a pure MLP to the concatenated vector of clustered features at time $T$. This amounts to seeing the MLP as a map from $\mathbb{R}^{nd}$ to $\mathbb{R}^{nd}$, which can be studied independently by existing theory. The second way consists in alternating the self-attention and the MLP layers. In this case, clustering in the sense of Theorems 3.1 and 5.1 should not be expected, since the weights of the MLP play the role of a value matrix $V$, and the conclusions of these theorems strongly depend on the identity-like structure. In Figure 3 of the PDF we illustrate an extension of Theorem 4.1 in the presence of such an MLP layer. --- Rebuttal Comment 1.1: Title: Thanks Comment: I thank the authors for these clarifications. I would appreciate it if the authors could revise the paper in light of the above discussion. I am happy to maintain my score for the paper.
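The convex-hull argument for the discrete-time update quoted in the rebuttal above can be checked numerically. The following is a minimal sketch (illustrative, not the paper's implementation), with $V = I_d$ and $Q^\top K = I_2$ assumed for simplicity:

```python
import numpy as np

# Minimal sketch (illustrative, not the paper's implementation) of the
# discrete-time update discussed in the rebuttal, with V = I_d:
#   z_i^{k+1} = (z_i^k + dt * sum_j P_ij^k z_j^k) / (1 + dt),
# which is a convex combination of the current tokens, so the convex hull
# can only shrink (cf. Proposition C.2).
def softmax_rows(A):
    E = np.exp(A - A.max(axis=1, keepdims=True))
    return E / E.sum(axis=1, keepdims=True)

def step(Z, QtK, dt):
    P = softmax_rows(Z @ QtK @ Z.T)     # n x n attention matrix
    return (Z + dt * (P @ Z)) / (1.0 + dt)

rng = np.random.default_rng(1)
Z = rng.standard_normal((8, 2))         # 8 tokens in R^2
QtK, dt = np.eye(2), 0.1                # assume Q^T K = I_2 for the sketch

# A cheap proxy for hull shrinkage: the coordinate-wise extent never grows.
extents = []
for _ in range(100):
    extents.append(np.abs(Z).max())
    Z = step(Z, QtK, dt)
assert all(b <= a + 1e-12 for a, b in zip(extents, extents[1:]))
```

Since each new token is a convex combination of the current ones (the coefficients $\frac{1+\Delta t P_{ii}}{1+\Delta t}$ and $\frac{\Delta t P_{ij}}{1+\Delta t}$ are nonnegative and sum to one), the monotone-extent assertion holds by construction.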
Summary: This paper studies the asymptotic behavior of a sequence of tokens processed by infinitely deep self-attention-only Transformers, viewed as interacting particle systems (1). The authors first study the one-dimensional case and show that the self-attention matrix converges to a low-rank boolean matrix. They then focus on particular choices of the value matrix $V$ to obtain clustering in higher dimensions. Specifically, when $V = I_d$, the paper shows that a time-rescaled version of the tokens converges to the boundary of a convex polytope. The paper then focuses on more realistic choices of $V$. When it has a single leading eigenvalue, the clustering happens toward one of at most $3$ hyperplanes. When the leading eigenvalue has multiplicity, the limit geometry is a convex polytope in some directions and a linear subspace in the others. All the theoretical results are numerically illustrated. Strengths: 1. The paper is very clear and well written. 2. The overall contribution is strong, as it is the first paper provably showing the emergence of clusters in self-attention-only Transformers. 3. More specifically, each theorem (well summarized in Table 1) is an interesting result on clustering in interacting particle systems. 4. Most assumptions on (Q, K, V) are clearly discussed. 5. The figures illustrating the theorems are well presented and insightful. Weaknesses: 1. Regarding the assumption in Th. 3.1 and 5.1 that $Q^TK$ is positive definite: I think that the strength of this assumption should be emphasized. See Questions. 2. An intuitive explanation (as for instance the one made after Th. 4.1 (l. 257 to 262)) would be welcome after Th. 3.1. Especially, how does the convex polytope $K$ depend on $Q^TK$? Typo: l. 242, I believe a $Q$ is missing in the quadratic form. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: 1.
What is the main obstacle to considering the time-dependent dynamics where $Q$, $K$ and $V$ depend on time, when $d = 1$ for instance? Are there some assumptions that can be made on the time dependency to obtain similar results? 2. Regarding Weakness 1. For instance, in practical implementations, I believe $Q^TK$ is low-rank. Maybe it should also be clarified that by $Q^TK > 0$ you also mean symmetric? In practical implementations, is $Q^TK$ close to a symmetric matrix? 3. In Th. 4.1, you no longer assume that $Q^TK$ is symmetric? Have you verified whether the second condition in Definition 1 holds for some pre-trained triple (Q, K, V) on ALBERT? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback, and respond to each point individually below. **Weaknesses.** 1. We recognize that this assumption is substantial. However, it does not appear to be necessary for our conclusions; rather, it serves to direct the proof. To reinforce the broader validity of our conclusion beyond just this specific assumption, we have carried out additional experiments (Figures 1, 2 in the PDF), suggesting that our clustering results are more universal. 2. Theorem 3.1 amounts to a couple of effects entailed by the dynamics. First of all, the convex hull of the particles is shrinking over time (Proposition C.2). This is due to the fact that the distance of the particle nearest to any half-space (not containing the particles) is decreasing. On the other hand, the convex hull ought not to collapse, since particles which have not concentrated near the boundary of the limiting polytope will continue to increase in magnitude until they themselves reach this boundary (Step 2 in the proof). The latter is due to the time-rescaling and the "probabilistic" structure of the self-attention mechanism. (A version of this comment will be added to the camera-ready version.) As for the convex polytope $\mathcal{K}$: it depends both on the initial sequence of particles and on $Q^\top K$. Unfortunately, we do not believe there is a way to predict $\mathcal{K}$ explicitly besides running the full dynamics. We also thank the Reviewer for spotting this rather important typo. --- **Questions.** 1. We believe that if we assume that $Q(t)^\top K(t)$ is bounded from below and above by positive multiples of the identity uniformly in $t$, the same conclusions as in Theorems 2.1, 3.1, 4.1 and 5.1 hold. This requires adaptations in the proofs. However, the conclusions of the theorems do not hold in general if $V$ depends on time. 2.
Indeed $Q^\top K\succ 0$ means in particular that $Q^\top K$ is symmetric (we use the partial order on the set of symmetric matrices). We agree with the Reviewer that this assumption is at odds with the low-rankness in practical implementations, but, as alluded to in the answer to Weakness 1, we do not believe that it is essential to the result. However, at this moment, we cannot identify a clear method to eliminate it from our proof. It does not appear evident that the pre-trained ALBERT matrices are approximately symmetric. 3. Indeed, no symmetry or positive-definiteness of $Q^\top K$ is required. Since we only look at the behavior of the particles along the direction given by $\varphi_1$, it suffices to assume that the induced quadratic form is positive only along said direction. With regard to the pre-trained triple: the first point is satisfied by $V_5$ and $V_{14}$ (indicated in the Supplementary Material), and the second point is also satisfied by $(Q_5, K_5)$ and $(Q_{14}, K_{14})$ (the inner products evaluated at the eigenvector of norm 1 equal 1.3060 and 0.6719 respectively). In other words, the triples $(Q_h, K_h, V_h)$ corresponding to heads $h=5$ and $h=14$ in ALBERT satisfy the assumptions of Definition 1. This comment will be added to the Supplementary Material of the camera-ready version.
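The two-part check described in the rebuttal above can be made concrete. The helper below is a hypothetical sketch of ours (not the authors' code; the example matrices are fabricated for illustration): condition (i) asks for a simple dominant eigenvalue of $V$ with unit eigenvector $\varphi_1$, and condition (ii) asks that $\langle Q\varphi_1, K\varphi_1\rangle > 0$:

```python
import numpy as np

# Hypothetical helper (ours, not the authors') mirroring the check described
# above for a triple (Q, K, V): (i) V has a simple dominant eigenvalue with
# unit eigenvector phi1, and (ii) the quadratic form of Q^T K is positive
# along span(phi1), i.e. <Q phi1, K phi1> > 0.
def check_definition_1(Q, K, V):
    eigvals, eigvecs = np.linalg.eig(V)
    order = np.argsort(-eigvals.real)
    lam, vecs = eigvals[order], eigvecs[:, order]
    dominant = bool(lam[0].real > abs(lam[1]))  # simple dominant eigenvalue
    phi1 = vecs[:, 0].real
    phi1 = phi1 / np.linalg.norm(phi1)          # eigenvector of norm 1
    positive_on_span = float(phi1 @ Q.T @ (K @ phi1)) > 0
    return dominant, positive_on_span

d = 4
V = np.diag([2.0, 1.0, 0.5, 0.5])   # dominant eigenvalue 2, the rest smaller
Q = np.eye(d)
K = np.eye(d)                       # here <Q phi1, K phi1> = 1 > 0
print(check_definition_1(Q, K, V))  # prints (True, True)
```

Applied to pre-trained weights, the second returned value plays the role of the inner products reported above (1.3060 and 0.6719 for heads 5 and 14).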
Rebuttal 1: Rebuttal: We thank the reviewers for their thoughtful feedback. We acknowledge their concerns regarding several assumptions we had made on the weight matrices for our analysis. All in all, our goal was to consider the simplest setting of transformers amenable to rigorous mathematical analysis. To support the broader validity of the clustering theorems, we have conducted a few numerical experiments which violate the assumptions made for the development of the theory, and observed that the clustering pattern persists. Namely, - In Figure 1, we replicate the experiment illustrating Theorem 3.1 in the setting where $Q^\top K$ is a random matrix with entries sampled from $\mathrm{Unif}([-1,1])$. The clustering pattern persists even for this choice of weights. - In Figure 2, we replicate the experiment illustrating Theorem 5.1 in the setting where $Q^\top K$ is a random matrix with entries sampled from $\mathrm{Unif}([-1,1])$. The same clustering pattern appears. - In Figure 3, we add an additional MLP layer in the setup of Theorem 4.1. More precisely, we use a 2-layer neural network: we apply a component-wise activation function (either ReLU or Tanh), and then multiply by a time-independent weight matrix $W$. We again see clustering, with the pattern depending on the weight matrix $W$ and on the activation function. In particular, in the first row (corresponding to $W=I_d$ and ReLU), we see that the particles first evolve so as to reach the positive orthant $\mathbb{R}^d_{>0}$ (due to the ReLU). Once they reach it, every particle eventually follows one of three hyperplanes determined by the spectrum of $V$ and the projection onto $\mathbb{R}^d_{>0}$. In the other two cases, all the particles appear to collapse to $0$. All experiments were conducted following the setup presented in Appendix F of the Supplementary Material. They will be added and documented in the Supplementary Material of the camera-ready version.
We comment on the difficulty of extending the theory to more general scenarios in the point-by-point replies to individual reviewers. Pdf: /pdf/402a481f0f8934feb62e938327c65f05d1a7d32d.pdf
NeurIPS_2023_submissions_huggingface
2023
FourierHandFlow: Neural 4D Hand Representation Using Fourier Query Flow
Accept (poster)
Summary: The paper proposes a method to reconstruct a 4D hand (a 3D hand sequence) from a short RGB sequence with two types of Fourier Query Flow (pose flow and shape flow). In the Fourier Query Flow, the 3D trajectory of each point is transformed into 3 Fourier series along the time dimension and represented with the first few coefficients (3*(2*6+1) = 39 in this paper). Pose flow is generated from joint flow via LBS. The geometry is represented in the canonical space with a pretrained occupancy network and warped into the real space with the pose flow. Then the shape flow adds small displacements to it. Experiments demonstrate that the proposed method outperforms existing methods and can produce continuous and smooth results. Strengths: Originality: The idea of representing 3D flow with Fourier series is novel and interesting. Quality: The presented results are of high quality. Clarity: The paper is well-written and easy to follow. Significance: It seems that this method prevents temporal jitter or abrupt motions in the results, which I’ll discuss in Questions. And it’s very computationally efficient. Weaknesses: I do not see any major weaknesses, but I still have some questions about this method. I’ll leave them to the Questions part, and it would be great if the authors could add these discussions to the paper. A typo: line 228 t()^2 ? Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. It seems that using the low-frequency representation can prevent temporal jitter or abrupt motions. However, the lack of high-frequency parts may lead to the Gibbs phenomenon. So, I want to see some discussion about this. Does your method still work in swift-movement situations? 2. It seems that the shape flow works in the real space (not the canonical space), which is different from some methods, like MANO. So, I wonder if it works for different hand shapes, like longer fingers or a wider palm. If it works, I’d like to see some examples with different hand shapes. 3.
How about applying the Fourier Query Flow to the MANO model directly rather than to a pretrained occupancy network? I don’t see why using an occupancy network would be better in this scenario. I’d like to see more discussion on this. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: I appreciate the authors’ discussions about limitations of the proposed approach in Sec. 5, which helps the understanding of the suitable scenarios. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
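The Fourier series trajectory representation summarized in the review above (3*(2*6+1) = 39 coefficients for N = 6) can be sketched as follows. This is a minimal illustration, not the paper's implementation; the coefficient layout is an assumption for the sketch:

```python
import numpy as np

# Minimal illustration (not the paper's implementation) of a 3D point
# trajectory parameterized by a truncated Fourier series with N terms per
# axis, i.e. 3 * (2N + 1) coefficients (3 * (2*6 + 1) = 39 for N = 6).
def fourier_trajectory(coeffs, t):
    """coeffs: (3, 2N+1) array laid out as [a0, a1..aN, b1..bN] per axis."""
    N = (coeffs.shape[1] - 1) // 2
    k = np.arange(1, N + 1)
    basis = np.concatenate(([1.0], np.cos(2*np.pi*k*t), np.sin(2*np.pi*k*t)))
    return coeffs @ basis  # the (x, y, z) point at time t

N = 6
coeffs = np.zeros((3, 2*N + 1))
coeffs[:, 0] = [0.1, 0.2, 0.3]  # constant terms: the mean position
coeffs[0, 1] = 0.05             # a small cos(2*pi*t) wobble along x

p0 = fourier_trajectory(coeffs, 0.0)      # [0.15, 0.2, 0.3]
p_half = fourier_trajectory(coeffs, 0.5)  # [0.05, 0.2, 0.3]
# Evaluation at any continuous t is closed form and smooth by construction,
# which is what enables temporal inter- and extrapolation.
```

Because the trajectory is band-limited by construction, high-frequency jitter is suppressed, which is exactly the trade-off the Gibbs-phenomenon question in the review is probing.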
Rebuttal 1: Rebuttal: We thank the reviewers for their thoughtful feedback and finding that the proposed method is “novel and interesting” [vgcD] and shows “large improvements” [rxcm, g8GH]. We also appreciate [rxcm] for finding that the “paper is clearly written and well-motivated.” In what follows, we address the concerns of the reviewers. --- > **Weakness 1.** "A typo: line 228 t()^2 ?" **Reply:** ^2 denotes *footnote 2*. We will change the footnote symbol for better clarity in the revision. Thank you for pointing this out. > **Question 1.** “It seems that using the low frequency representation can prevent temporal jitters or abrupt motions. However, lack of high frequency parts may lead to Gibbs phenomenon. So, I want to see some discussion about this. Does your method still work in the swift movement situation?” **Reply:** While the Gibbs phenomenon *theoretically* occurs when approximating a discontinuous function with a finite Fourier series, we have observed that the function that we aim to reconstruct -- the hand motion captured in InterHand2.6M [26] -- is mostly smooth enough to be reconstructed with a small number ($N=6$) of Fourier terms. In Table S1 in the supplementary, we quantitatively showed that our shape reconstruction accuracy does not noticeably increase when using a higher number of Fourier terms than $6$ on the InterHand2.6M dataset. Also, *Sequences 1 and 2* of our supplementary video (0:13-0:42) qualitatively show one of the swiftest hand motions captured in the dataset, and we have not *noticeably* observed the Gibbs phenomenon in the reconstructed motion. However, if one has to reconstruct higher-frequency motions (other than the hand motions in InterHand2.6M), it would be desirable to use a higher number of Fourier terms ($N>6$) in our method. We will add this discussion in the revision. > **Question 2.** “It seems that the shape flow is working in the real space (not the canonical space), which is different from some methods, like MANO.
So, I wonder if it works for different hand shapes, like longer finger or wider palm. If it works, I’d like to see some examples with different hand shapes.” **Reply:** As you mentioned, our hand shape reconstruction itself is not constrained by MANO [34] PCA subspace. However, we have found that the hands captured in InterHand2.6M [26] dataset do not contain significant shape variations across the identities. Although our implicit function-based method leads to better reconstruction results than the MANO-based prediction method [45] in Table 1 in the paper, we will try to show reconstruction examples with more diverse hand shapes using the *very recently-released* dataset [R1] containing more hand shape variations. Thank you for your valuable comment. [R1] Potamias *et al.*, Handy: Towards a high fidelity 3D hand shape and appearance model, In CVPR, 2023. > **Question 3.** “How about applying Fourier Query Flow to MANO model directly but not a pretrained occupancy network? I don’t see why using occupancy network would be better in this scenario. I’d like to see more discussion on this.” > **Reply:** We thought that one of the important advantages of Fourier Query Flow is its ability to inherently model the **continuous** shape deformations along the temporal axis, e.g., allowing temporal shape inter- and extrapolation (Figure 5 in the paper). Although it is still possible to apply Fourier Query Flow to the MANO model (which is spatially discretized) directly, we wanted to preserve the continuity in the representation along both the spatial and the temporal axes — to keep the consistency between the axes and also to follow the motivation of the existing 4D continuous representations [14, 15, 29]. For example, we have observed that our method can generate more plausible hand shapes due to the capability for reconstruction in an arbitrary spatial resolution -- as shown in Figure R1 (in PDF). We will make sure to add this discussion in the revision. 
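The "reconstruction at an arbitrary spatial resolution" advantage claimed in the reply above can be illustrated with a toy occupancy field. Here a sphere stands in for the learned occupancy network (an assumption of ours for the sketch, not the paper's model):

```python
import numpy as np

# Toy illustration of the arbitrary-resolution property of an implicit
# occupancy representation: a sphere stands in for the learned occupancy
# network, and the same continuous field is queried on grids of different
# resolutions at test time.
def occupancy(points, radius=0.5):
    return (np.linalg.norm(points, axis=-1) < radius).astype(np.float32)

def query_grid(res):
    lin = np.linspace(-1.0, 1.0, res)
    grid = np.stack(np.meshgrid(lin, lin, lin, indexing="ij"), axis=-1)
    return occupancy(grid.reshape(-1, 3)).reshape(res, res, res)

low = query_grid(16)    # coarse preview
high = query_grid(64)   # finer surface from the very same field
# As the resolution grows, the occupied fraction approaches the sphere's
# volume ratio (4/3 * pi * 0.5**3) / 2**3, about 0.065.
```

Because the field is continuous, the mesh extraction resolution (e.g., the marching-cubes grid) becomes a test-time choice rather than a property baked into a fixed mesh topology.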
--- Rebuttal Comment 1.1: Title: Comment to the rebuttal Comment: I thank the authors for their time and effort in answering my questions. The authors' replies address most of my concerns. However, there is still one thing I want to point out. In my Question 2, what I am really concerned about is why p = LBS(p') + \delta p (this is a simple version of Equation 5 in your paper), but not p = LBS(p' + \delta p). Why is it designed this way? Theoretically, the former one won't work well on shape variations. And I think Fourier-->inputCond in your experiments is not the latter one ( p = LBS(p' + \delta p) ), which is what I mean. So, I maintain my score for this work. --- Reply to Comment 1.1.1: Comment: Thank you for your valuable comment. Let Eq. (1) be our current shape formulation $\mathbf{p} = \mathrm{LBS}(\mathbf{p}’) + \Delta \mathbf{p}$, and let Eq. (2) be the suggested shape formulation $\mathbf{p} = \mathrm{LBS}(\mathbf{p}’ + \Delta \mathbf{p})$. The main difference between Eq. (1) and Eq. (2) lies in whether to predict the shape variations (modeled by $\Delta \textbf{p}$) in the unposed space (before applying LBS) or in the posed space (after applying LBS). Since our method performs image-based hand reconstruction using pixel-aligned features, we thought that modeling such shape variations in the posed space — which is more directly aligned with the input image space — would be more effective. Also, the shape formulation in Eq. (2) would require our LBS network to learn implicit LBS fields dependent on the canonical shape variations (rather than one pre-trained LBS field as in our current method). Thus, we thought that the formulation in Eq. (1) would be a simpler solution. Yet, we will also try to investigate the suggested direction in our future work. Thank you again for your thoughtful comment.
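The difference between the two shape formulations debated above can be made concrete with a toy 2D example, where a single rigid transform stands in for LBS (our simplification, not the paper's skinning model):

```python
import numpy as np

# Toy 2D illustration (not the paper's LBS) of the two shape formulations
# discussed above, with a single rigid transform standing in for LBS:
#   Eq. (1): p = LBS(p') + dp   (displacement applied in the posed space)
#   Eq. (2): p = LBS(p' + dp)   (displacement applied in the unposed space)
def lbs(p, R, t):
    return R @ p + t

theta = np.pi / 2                       # a 90-degree "pose"
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
t = np.array([1.0, 0.0])
p_canonical = np.array([1.0, 0.0])
dp = np.array([0.1, 0.0])               # e.g., a "longer finger" offset

p_eq1 = lbs(p_canonical, R, t) + dp     # ~ [1.1, 1.0]
p_eq2 = lbs(p_canonical + dp, R, t)     # ~ [1.0, 1.1]
# The same displacement ends up pointing in different directions: the two
# formulations differ by R @ dp - dp.
assert not np.allclose(p_eq1, p_eq2)
```

This makes the reviewer's point visible: in Eq. (1) the offset is expressed in image-aligned (posed) coordinates, while in Eq. (2) it is carried along by the pose, which is what identity-level shape variation (longer fingers, wider palm) would require.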
Summary: This paper introduces FourierHandFlow - an implicit 4D representation for learning spatio-temporal hand shape deformations. The core idea is to introduce a coarse-to-fine implicit deformation model parameterized with a fixed Fourier basis to ensure smoothness and efficient inference. The coarse (pose / joint flow) part models the dynamics of the joints conditioned on images and pose predictions, and the fine part models the dynamics of the per-query-point deformations on top of the joint flow, conditioned on image features. Experimental evaluation is conducted on InterHand2.6M and indicates that the proposed method outperforms recent baselines in terms of quality of shape reconstruction. Strengths: - The overall coarse-to-fine flow formulation generally makes sense, as most of the local per-point deformations would be strongly dependent on the joints. - The proposed formulation's use of a fixed Fourier basis parameterisation of the flows is interesting and bears multiple benefits: a) it allows for more efficient inference, since re-sampling points over the temporal domain can be computed in closed form for different timesteps; b) it is guaranteed to lead to smooth trajectories by construction. - The implicit 4D formulation leads to automatic learning of correspondences, which allows texture transfer. - Quantitative results indicate that the proposed method outperforms multiple recent baselines (although there is not enough clarity on the evaluation protocol, see below). Weaknesses: 1. The paper is at times hard to follow. - For example, when introducing the method (L136), the authors do not really specify an exact form of the underlying representation, and vaguely refer to it taking a sequence of RGB frames as input. Does the overall method take any other sources of supervision or conditioning? - Similarly, from Section 3.1 it is not really possible to understand what ground truth is used to pre-train the occupancy and LBS functions. Do you have geometry ground truth here?
What is the resolution of this ground truth (a short look at InterHand2.6M suggests that the provided geometry comes from MANO)? - After going through the experimental section I am still not sure I understand what ground truth geometry this paper is comparing against. 2. Motivation / quality. - The quality of the resulting geometry seems to be of low resolution and does not contain any high-frequency details. - The main argument for using the more complex machinery of implicit 4D representations over explicit mesh-based representations (L108) is their ability to capture high-resolution geometry details. - Yet, the resulting meshes seem to contain no high-resolution details, and examples e.g. in Figure 3 show that all the considered methods struggle with rough alignment with the images - which should be possible to solve with more robust keypoint / segmentation constraints. - The question thus arises whether there is enough motivation for the development of the complex implicit machinery. If resolution is the only limiting factor, why not upsample the mesh of MANO and fit it to ground truth scans, followed by a simple image2shape regression? 3. Evaluation / baselines. - There is no comparison to a parametric MANO model fitted to sparse constraints such as keypoints / segmentation. Technical Quality: 2 fair Clarity: 1 poor Questions for Authors: - Could you please clarify what exactly is the geometry you are using for training the occupancy/LBS models and for evaluation? If it is MANO-fitted shapes, then, if your model does not have any appearance, I am confused how it can learn any more detailed reconstruction than the underlying low-resolution parametric model? - Why is there no comparison to a simple MANO model fitted to whatever InterHand2.6M constraints are available? Confidence: 3: You are fairly confident in your assessment.
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 1 poor Contribution: 2 fair Limitations: Authors discuss limitations and broader societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewers for their thoughtful feedback and finding that the proposed method is “novel and interesting” [vgcD] and shows “large improvements” [rxcm, g8GH]. We also appreciate [rxcm] for finding that the “paper is clearly written and well-motivated.” In what follows, we address the concerns of the reviewers. --- > **Weakness 1-1.** “For example, when introducing the method (L136), authors do not really specify an exact form of the underlying representation, and vaguely refer to it taking a sequence of RGB frames as input. Does the overall method take any other sources of supervision or conditioning?” > **Reply:** During training, our method takes as input a sequence of monocular RGB frames (lines 136-137) and outputs 4D hand shapes in the form of an occupancy field, which is supervised by the ground truth 4D hand geometry (i.e., a temporal sequence of 3D hand geometries) of InterHand2.6M [26] dataset (lines 258-260). During testing, we only use a sequence of RGB frames as input to our method, as our main goal is to reconstruct 4D human hands from monocular RGB sequences (lines 19-20, 36-37). > **Weakness 1-2.** “Similarly, from Section 3.1 it is not really possible to understand what is the ground truth used to pre-train the occupancy and LBS functions. Do you have geometry ground truth here? What is the resolution of this ground truth?” > **Reply:** The same InterHand2.6M [26] hand geometries are used for pre-training the occupancy and LBS functions. These ground truth shapes are given as meshes with the MANO [34] topology. > **Weakness 1-3.** “After going through the experimental section I am still not sure I understand what is the ground truth geometry that this paper is comparing against.” > **Reply:** We used "the ground truth shapes of InterHand2.6M dataset" (line 268-269). > **Weaknesses 2-1 - 2.3.** “[...] 
The main argument for using more complex machinery of implicit 4D representations over explicit mesh-based representations (L108) is the ability of those to capture high-resolution geometry details. Yet, the resulting meshes seem to contain no high-resolution details. [...] > > **Weakness 2-4.** “The question thus arises if there is enough motivation for the development of the complex implicit machinery. [...]” > > **Question 1.** "I am confused how can it learn any more detailed reconstruction than the underlying low-resolution parametric model?" **Reply:** In Figure 3 in the paper, we showed our qualitative results in comparison to implicit function-based methods only. In Figure R1 (in PDF), we additionally show the reconstruction examples of the current state-of-the-art mesh-based method [19] and ours on the InterHand2.6M [26] dataset. As implicit representations learn continuous shape fields, they can naturally reconstruct shapes at an arbitrary resolution (including extrapolated resolutions beyond the observed ground truth resolution). Following the existing methods [15, 16, 18, 24, 25, 29, 39] that similarly use the ground truth MANO [34] or SMPL [21] meshes for training implicit functions, we believe that using an implicit representation introduces several advantages, including (1) the aforementioned arbitrary-resolution reconstruction and (2) more accurate shape modeling due to the utilization of pixel-aligned features directly corresponding to **dense** query points. In Table 1 in the paper, we also experimentally demonstrated that our method reconstructs more accurate shapes than the existing methods based on MANO parameter [44, 45] or MANO vertex regression [19] on InterHand2.6M. About (1) using "keypoint / segmentation constraints" and (2) "upsampl[ing] the mesh of MANO and fit[ting] it to ground truth scans" mentioned in the review, please refer to our answers to the next questions.
> **Weakness 3-1.** “There is no comparison to a parametric model MANO fitted to sparse constraints such as keypoints / segmentation.” > > **Question 2.** *“Why is there no comparison to a simple MANO model fitted to whatever InterHand2.6M constraints are available?”* > **Reply:** We would like to kindly remind you that **we aimed to perform 4D hand reconstruction from monocular RGB sequences in this paper**. Thus, **only RGB observations are available as inputs to our method — making it non-trivial to directly perform MANO fitting to "keypoints", "segmentation", or "ground truth scans" as mentioned**. Therefore, for experimental comparisons, we mainly considered the existing methods [18, 19, 44, 45] with the state-of-the-art *RGB-based* reconstruction results on InterHand2.6M [26]. Although it is possible to consider taking a multi-stage approach, in which we first predict the keypoints and segmentation masks from the input RGB frames and then perform optimization-based MANO fitting to those predictions, the accuracy of the fitted MANO would be directly bounded by the accuracy of the intermediate predictions. Also, considering that one of the objectives of this paper was the *"computational efficiency of 4D reconstruction"* (lines 41-42), a MANO fitting-based approach would lead to a less efficient solution due to the need for test-time optimization. In the table below, we show the experimental results of such MANO fitting using the state-of-the-art keypoint and segmentation predictor (IntagHand [19]) on the InterHand2.6M [26] dataset, which yields less accurate and less efficient reconstruction results than ours. For the comparisons with the existing methods based on feedforward MANO parameter prediction [45] or MANO vertex coordinates prediction [19] directly from images - which also intermediately utilize the *predicted* joints and segmentation priors - please refer to Table 1 in the paper.
| | Shape accuracy (IoU $\uparrow$) | Shape accuracy (CD $\downarrow$) | Inference time (sec. $\downarrow$) | | --- | --- | --- | --- | | **MANO [34] fitting** | 43.7 | 6.48 | 28.4 | | **Ours** | 62.8 | 4.46 | 0.22 | --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response! I am not sure it is clear to me how the method could learn anything beyond the ground truth supervision (which are MANO shapes). The fact that you are using richer conditioning signals could help only in the case of either having appearance or scans (multi-view reconstructions) _as supervision_ ? Otherwise, the upper bound for what the model can learn will always be MANO model (since it is the model that generated the ground truth?). --- Reply to Comment 1.1.1: Comment: As you mentioned, the shape details learned by our method may be bounded to the ground truth shapes (which are MANO shapes in the case of InterHand2.6M [26]) used in model training. However, we again clarify that our main motivation for using implicit representation is its ability (1) to learn accurate pixel-aligned shapes and (2) to model shapes in an arbitrary resolution. For (1) learning accurate pixel-aligned shapes, we showed (in Table 1 in the paper and our rebuttal experiment) that our implicit representation-based method leads to more accurate shape-fitting results with respect to the ground truth MANO meshes of InterHand2.6M [26] — compared to the direct MANO parameter or MANO vertex regression methods [19, 44, 45]. This is because we utilized “the pixel-aligned features directly corresponding to dense query points” as mentioned in our rebuttal. 
For (2) modeling shapes in an arbitrary resolution, we followed the motivation of the existing methods [15, 16, 18, 24, 25, 29, 39] that similarly use the ground truth MANO [34] or SMPL [21] meshes for training implicit functions — which discussed its ability to adaptively control the shape resolution at test time based on the application needs (rather than one fixed resolution as in the existing mesh-based reconstruction methods [19, 44, 45]). We think that particularly learning high-frequency shape details (e.g., hand wrinkles) is a different matter, and *we did not originally claim this anywhere in the paper* — though it is achievable given more detailed GT shapes (similar to reconstructing clothing details in human bodies using implicit representation as in, e.g., [R1, R2, R3]). [R1] Saito *et al.*, PIFu: Pixel-Aligned Implicit Function for High-Resolution Clothed Human Digitization, ICCV 2019 [R2] Xiu *et al.*, ICON: Implicit Clothed Humans Obtained from Normals, CVPR 2022 [R3] He *et al.*, Geo-PIFu: Geometry and Pixel Aligned Implicit Functions for Single-view Human Reconstruction, NeurIPS 2020
Summary: This paper introduces an implicit spatio-temporally continuous hand representation for RGB videos. Firstly, based on LEAP [25], the occupancy function and LBS weights are pretrained as priors for query points. Then two query flow representations are introduced to model the skeleton and the shape, respectively. The query flow representations are Fourier coefficients, which can be treated as a low-pass filter for 4D representations. Experiments on InterHand2.6M and RGB2Hands show the proposed method achieves accurate and efficient 4D predictions. Strengths: - The proposed 4D implicit representation is highly efficient during both training and testing. - The Fourier coefficients are effective for obtaining smooth and continuous temporal dynamics. Weaknesses: - [Balance] This paper proposes using Fourier coefficients to model 4D hands and emphasizes that this will guarantee smooth and continuous temporal dynamics. In particular, it learns coefficients for N=6 basis functions (Line 175). I am wondering how the Fourier coefficients and N balance the accuracy, the efficiency and the smoothness. For example, will N=6 lead to over-smoothing? Why not use a larger N? Is it because of efficiency? - [Optimisation] Unlike existing representations, the proposed representation is coefficients. I am wondering if this will raise any training or optimization issues. For example, will this representation require more epochs to converge or be harder to fit during training? - [$\Psi$] For pose flow, there is an off-the-shelf pose estimator $\Psi$. I'm curious about the role this pose estimator plays. Is it like a condition that directly determines the final performance, an initial value that only reduces the search space (Line 209), or a refinement (Tab.3)? Would it be possible to use different $\Psi$ with varied performance and see the impact on the final results during training? Also, what will happen if $\Psi$ provides bad predictions during testing? What if we do not have $\Psi$?
- [Metric] It would be better to compare SOTA methods like [19,44,45] wrt other metrics like MPJPE, even if they focus on a single frame or are based on MANO. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See weaknesses. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yeah. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewers for their thoughtful feedback and finding that the proposed method is “novel and interesting” [vgcD] and shows “large improvements” [rxcm, g8GH]. We also appreciate [rxcm] for finding that the “paper is clearly written and well-motivated.” In what follows, we address the concerns of the reviewers. --- > **Weakness 1.** “[Balance] This paper proposes using Fourier coefficients to model 4D hand and emphasizes this will guarantee smooth and continuous temporal dynamics. [...] why not use a larger N? is it because of efficiency?” > **Reply:** In Table S1 in the supplementary, we provide our 4D hand reconstruction results with the varying value of $N =$ {$4, 6, 8$}. As discussed in lines 19-21 in the supplementary, our model performance is slightly decreased when $N$ is too small ($N = 4$), while it is not affected much by $N$ when $N ≥ 6$. Thus, we choose $N = 6$ in our main method to achieve a good balance between accuracy and computational efficiency. We did not consider a value beyond 8 for $N$, as it exceeds the Nyquist frequency given $T = 17$. > **Weakness 2.** “[Optimisation] Unlike existing representations, the proposed representation is coefficients. I am wondering if this will raise any training or optimization issues. For example, will this representation require more epochs to converge or be harder to fit during training?” > **Reply:** Although our model prediction is done in the coefficients space, the whole forward process is differentiable (note that Eq. (1) is defined by the linear combination of sine and cosine functions), and we have observed no particular difficulty during the model optimization. About the training convergence, in Figure R2 (in PDF), we show the validation loss graphs of our model and the variation of our model based on the direct shape-space prediction (*Fourier → InputCond* in Table 4 in the paper). 
There is no significant difference in the training convergence trend, and both models converge approximately after 100K steps in training. > **Weakness 3.** “[Ψ] For pose flow, there is an off-the-shelf pose estimator Ψ. I'm curious about the role this pose estimator plays. Is it like a condition that directly determines the final performance, an initial value that only reduces the search space (Line 209), or a refinement (Tab.3)? Would it be possible to use different Ψ with varied performance and see the impact on the final results during training? Also, what will happen if Ψ provides bad predictions during testing? What if we do not have Ψ?” > **Reply:** $\Psi$ provides the initial (noisy) pose values to reduce the search space of our model prediction, and here we demonstrate that our final shape accuracy is quite robust to the quality of the initial pose predictions. In the table below, we evaluate our 4D reconstruction results using two different models for $\Psi$ [19, R1]. While the initial pose accuracies have a considerably large gap (3.7mm in MPJPE), our final shape accuracies do not vary much, and **both settings achieve the state-of-the-art 4D reconstruction results** compared to the baseline methods in Table 1 in the paper. | $\Psi$ | Initial joint error (MPJPE $\downarrow$) | Shape accuracy (IoU $\uparrow$) | Shape accuracy (CD $\downarrow$) | | --- | --- | --- | --- | | **IntagHand [19]** | 13.3 | 62.8 | 4.46 | | **DIGIT [R1]** | 17.0 | 62.2 | 4.58 | [R1] Fan *et al.*, Learning to disambiguate strongly interacting hands via probabilistic per-pixel part segmentation. In 3DV, 2021. To additionally examine “*what will happen if $\Psi$ provides bad predictions during testing*”, we also measured our 4D reconstruction accuracy after injecting uniformly sampled noise with amplitude [−$x$, +$x$] into every dimension of the initial pose prediction at test time — following the exact ablation study setting in [16] (Section D in the supplementary).
As shown in the table below, our model performance is quite robust to the injected noise (with accuracy decreasing only gently at higher noise levels), and again, **all settings achieve the state-of-the-art results** compared to the baselines in Table 1 in the paper. | Noise level ($x$) | Shape accuracy (IoU $\uparrow$) | Shape accuracy (CD $\downarrow$) | | --- | --- | --- | | **0mm (No noise)** | 62.8 | 4.46 | | **1mm** | 62.7 | 4.48 | | **3mm** | 62.5 | 4.50 | | **5mm** | 62.2 | 4.53 | Currently, our method does not assume a situation where $\Psi$ does not exist. We will further investigate this direction in our future work. > **Weakness 4.** “[Metric] It would be better to compare SOTA methods like [19,44,45] wrt other metrics like MPJPE, even if they focus on a single frame or are based on MANO.” > **Reply:** Thank you for your suggestion. Although our primary goal in this research was accurate shape reconstruction rather than joint estimation, we additionally evaluated our results on MPJPE in comparison to [19, 44, 45], as shown in the table below. As [19] and [45] originally use the additional *ground truth bone length* for rescaling the joint predictions, we use the mean bone length of the hands in the training set of InterHand2.6M [26] following [18] to perform fair comparisons. Our method achieves a lower (better) MPJPE than the compared methods on InterHand2.6M dataset. | $\Psi$ | Joint error (MPJPE $\downarrow$) | | --- | --- | | **ACR\*[44]** | 18.3 | | **Two-Hand-Shape-Pose [45]** | 15.9 | | **IntagHand [19]** | 13.3 | | **Ours** | 11.0 | --- Rebuttal Comment 1.1: Comment: Thanks for the authors' rebuttal. I am satisfied with the authors' responses regarding [$\Psi$] and [Metric]. However, I still do not fully understand the other two. At this time, I still keep my rating. [Balance] The authors claim their selection is a good balance. However, Table S1 only highlights the accuracy, not the computational efficiency.
Would it be possible to provide computational efficiency like Tab. 2? Also, the authors did not reply to me about the temporal over-smoothing issue, but I found the reply in the other reviewers' part, so I think it is OK. As three of the reviewers are concerned about the over-smoothing issue, I think this part could be highlighted in the revision. [Optimisation] I do not agree with the answer to this part. Optimization is related to convergence speed and convergence accuracy. Based on Figure R2, it seems (a) is faster with a higher loss than (b). Does this imply coefficient learning leads to a slower speed and better accuracy? Why? --- Reply to Comment 1.1.1: Comment: Thank you for your thoughtful comment. We additionally provide our response to your questions below. **[Balance]** In the table below, we additionally show the computational efficiency with varying $N$ (along with the accuracy reported in Table S1) to further support our claim that $N=6$ achieves a good balance between accuracy and computational efficiency. We will add these additional results in the revision. | | **IoU (%)** $\uparrow$ | **CD (mm)** $\downarrow$ | **L1-Corr (mm)** $\downarrow$ | **Training Time (sec.)** $\downarrow$ | **Inference Time (sec.)** $\downarrow$ | | --- | --- | --- | --- | --- | --- | | **N=4** | 62.5 | 4.50 | 10.9 | **2.50** | **0.12** | | **N=6** | **62.8** | 4.46 | **10.8** | 2.75 | 0.15 | | **N=8** | 62.6 | **4.45** | **10.8** | 2.87 | 0.18 | Following your suggestion, we will also make sure to highlight our discussion on the “temporal over-smooth” issue in the revision. **[Optimisation]** We still think that there is no *significant* difference in the convergence speed between (a) and (b), and both reach the validation loss of $3.3 \times 10^{-3}$ approximately after 100K steps. The validation loss of (b) *slightly* decreases further after 100K steps -- leading to a better accuracy compared to (a) (Table 4 in the paper) as you mentioned.
We believe that this is because our flow estimation in the frequency-domain subspace naturally guarantees smooth temporal deformations, which better match the characteristics of our target motion to reconstruct (i.e., hand motions captured in InterHand2.6M [26]) — as also discussed in our response to Question 1 of [vgcD]. If our response and the additional experimental results have addressed most of your concerns (including **[$\Psi$]** and **[Metric]** addressed in our previous response), we would greatly appreciate it if you would kindly consider updating your rating. Once again, we thank you for your insightful discussions.
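As an aside for readers following this thread, the low-pass behaviour of the frequency-domain subspace can be illustrated with a minimal toy sketch. This is our own illustrative code, not the authors' implementation: the basis layout and the least-squares fitting step are assumptions made only to demonstrate the principle.

```python
import numpy as np

def fourier_basis(T, N):
    """Design matrix of shape (T, 2N+1): a constant term plus N sine/cosine
    harmonics evaluated at T uniformly spaced time steps."""
    t = np.linspace(0.0, 1.0, T)
    cols = [np.ones(T)]
    for n in range(1, N + 1):
        cols.append(np.cos(2 * np.pi * n * t))
        cols.append(np.sin(2 * np.pi * n * t))
    return np.stack(cols, axis=1)

def lowpass_reconstruct(traj, N):
    """Least-squares fit of Fourier coefficients to a 1D trajectory, then
    reconstruction; content above the N-th harmonic is discarded."""
    B = fourier_basis(len(traj), N)
    coeffs, *_ = np.linalg.lstsq(B, traj, rcond=None)
    return B @ coeffs

T = 17                                      # sequence length used in the paper
t = np.linspace(0.0, 1.0, T)
smooth = np.sin(2 * np.pi * t)              # smooth joint motion (1st harmonic)
jitter = 0.3 * np.sin(2 * np.pi * 7 * t)    # high-frequency jitter (7th harmonic)
recon = lowpass_reconstruct(smooth + jitter, N=6)

# With N = 6 the 7th harmonic lies outside the representable subspace, so
# the reconstruction recovers the smooth component and suppresses the jitter.
print(np.max(np.abs(recon - smooth)))
```

On this grid the 7th harmonic is orthogonal to the first six, so the least-squares fit assigns it (nearly) zero coefficients and the reconstruction recovers the smooth component almost exactly, which is the sense in which the representation regularizes jittery motion.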
Summary: This paper proposes FourierHandFlow, a 4d hand pose and shape representation that inherently uses Fourier series as query flow representation. Given RGB sequence, a fixed number of Fourier series are learned to represent hand pose and shape. The authors use two types of flows to decompose pose and shape: pose flow (joint flow) and shape flow. Such decomposition makes it more efficient to reconstruct 4d hands. Experiments on Interhand2.6M and RGB2Hands datasets demonstrate its superiority over existing two hand estimation methods. Strengths: This paper is clearly written and well-motivated. The compact Fourier series representation is proved to be effective for two hand 4d reconstruction in a smoother way. The video results are impressive. The quantitative results show large improvements. Weaknesses: The main weakness is that, I am not very sure whether such representation could correctly model the surface details (or pose dependent deformation) of the hand because Fourier series based representation seem to generate over smoothed results due to its low dimension nature of shape flow. For example, at video 04:06 (frame 7101), the little finger of the left hand in “Alt. View 1” is too thin. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1. How to obtain the gt 3d shape of InterHand2.6M for training? Is that simply the MANO labeling? If true, what’s the advantage of using implicit representation instead of MANO for hand shape learning? 2. L. 228, what does ^2 mean? Is that a mistake? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: Yes. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewers for their thoughtful feedback and finding that the proposed method is “novel and interesting” [vgcD] and shows “large improvements” [rxcm, g8GH]. We also appreciate [rxcm] for finding that the “paper is clearly written and well-motivated.” In what follows, we address the concerns of the reviewers. --- > **Weakness 1.** “I am not very sure whether such representation could correctly model the surface details (or pose dependent deformation) of the hand because Fourier series based representation seem to generate over smoothed results due to its low dimension nature of shape flow.” > **Reply:** We want to clarify that we used Fourier series **along the temporal** **axis** (Eq. (1) in the paper) to model the smooth temporal change of hand shapes for 4D reconstruction. Although such representation may cause over-smoothed shape change **along the temporal (*t*) axis**, our shape prediction **along the spatial (*x, y, z*) axes** is not bounded by the frequency-domain subspace because we did not apply Fourier series along the spatial axes unlike in the existing methods [9, 13, 16]. We will make sure to clarify this point in the revision. > **Question 1.** “How to obtain the gt 3d shape of InterHand2.6M for training? Is that simply the MANO labeling? If true, what’s the advantage of using implicit representation instead of MANO for hand shape learning?” > **Reply:** The ground truth shapes of InterHand2.6M [26] are given as meshes with MANO [34] topology. Following the 3D/4D reconstruction methods [15, 16, 18, 24, 25, 29, 39] that similarly use the ground truth meshes with a fixed topology (e.g. MANO [34] or SMPL [21] meshes) for training implicit functions, we believe that using implicit representation introduces several advantages over mesh-based representation. 
Most importantly, implicit representation utilizes the pixel-aligned features directly corresponding to the **dense** query points, often leading to more accurate shape reconstructions that are better aligned to the input image [9, 18]. In Table 1 in the paper, we experimentally demonstrated that our method reconstructs more accurate shapes than the existing methods based on MANO parameter [44, 45] or MANO vertex regression [19] on InterHand2.6M. We also note that, as implicit representations learn continuous shape fields, they can naturally reconstruct shapes at an arbitrary resolution (including the extrapolated resolution beyond the observed ground truth resolution). In Figure R1 (in PDF), we show the qualitative reconstruction examples of the current state-of-the-art mesh-based reconstruction method [19] and ours on InterHand2.6M dataset, where ours produces more plausible hand shapes. We will add this discussion in the revision. > Question 2. *“L. 228, what does ^2 mean? Is that a mistake?”* > **Reply:** ^2 denotes *footnote 2*. We will change the footnote symbol for better clarity in the revision. Thank you for pointing this out. --- Rebuttal Comment 1.1: Title: reply Comment: I have read the rebuttal. The authors solved my concerns.
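To make the arbitrary-resolution argument in this exchange concrete, here is a minimal toy sketch (our own code, not the authors'; a sphere stands in for the learned hand occupancy network, and all names and grid bounds are illustrative) of querying one continuous occupancy field at two different test-time resolutions:

```python
import numpy as np

def occupancy(points):
    """Toy continuous occupancy field (a unit sphere stands in for the
    learned hand occupancy network): 1 inside the surface, 0 outside."""
    return (np.linalg.norm(points, axis=-1) <= 1.0).astype(np.float32)

def query_grid(res):
    """Query the field on a dense res^3 grid -- the same continuous field
    can be sampled at any resolution chosen at test time."""
    lin = np.linspace(-1.5, 1.5, res)
    grid = np.stack(np.meshgrid(lin, lin, lin, indexing="ij"), axis=-1)
    return occupancy(grid.reshape(-1, 3)).reshape(res, res, res)

# One field, two voxelizations on demand; a mesh could then be extracted
# with marching cubes at either resolution.
coarse, fine = query_grid(16), query_grid(64)
print(coarse.shape, fine.shape)
```

This is what "adaptively controlling the shape resolution at test time" amounts to operationally: only the query grid changes, not the learned representation.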
Rebuttal 1: Rebuttal: We provide the figures referred to in our author response in the PDF file below. Pdf: /pdf/e32b5c99d6cea3961645cb3b75b10dfa2fd89953.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper introduces FOURIERHANDFLOW, which is a spatio-temporal continuous representation for human hands. It combines a continuous 3D hand occupancy field with articulation-aware query flows represented as Fourier series along the temporal axis. These query flows are parameterized by coefficients learned from an input RGB sequence. Specifically, two types of Fourier query flows, namely pose flow and shape flow, are used to address the challenges of continuous and smooth 4D reconstruction, computational efficiency, and articulated shape modeling with correspondences. Strengths: - This paper provides a well-articulated description of the problems with existing implicit methods. It presents a clear logical flow and addresses specific challenges effectively. The results show particularly noticeable improvements in correcting abrupt or jittery motions. - Although the Fourier series is not a novel concept, this paper extends it to the temporal dimension. Weaknesses: - It is undeniable that this work is closely related to the Fourier Occupancy Field [9]. The authors should consider adding comparative experiments with this work. - This paper focuses on describing the acquisition of the Fourier Query Flow in the method section but lacks a direct description of how the final pipeline for generating the 4D hand is constructed (e.g., how the pre-trained canonical occupancy field is utilized). A more comprehensive overview of this aspect should ideally be included in the first paragraph of the methodology section for better clarity. - The pose flow and the shape flow are two important branches proposed in this paper, and the final flow is the sum of both. It would be helpful to provide clearer illustrations that show the trajectories generated by each branch separately, as well as the combined trajectory. This would enable readers to gain a clearer understanding of the method.
Technical Quality: 2 fair Clarity: 3 good Questions for Authors: - In the prediction of pose flow, I think that the essence of being able to perform articulated shape modeling is to represent the skeleton of the hand in the form of a graph, which can maintain the topological structure of the hand. Therefore, in this process, the overall skeleton of the hand is predicted frame by frame. But the subsequent prediction of Fourier coefficients is based on the trajectory of each joint, so how is the temporal smoothness of each joint constrained here? - For different input sequences, is it necessary to normalize them to the same sampling space? If so, what are the specific implementation details? - In the pose flow, the paper estimates the joint flow of the keypoints and then propagates it to arbitrary points based on LBS. Therefore, the estimation of the shape flow at arbitrary points will rely heavily on the accuracy of LBS. Does the paper have any special handling or explanation for this situation? - Recommendation: For the ODE-based method, although its computational complexity is relatively high, it can maintain the property of diffeomorphism. This aspect can be considered later. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: The authors have partially addressed the limitations of their work, though there is space for improvement (see the section Strengths And Weaknesses). Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewers for their thoughtful feedback and finding that the proposed method is “novel and interesting” [vgcD] and shows “large improvements” [rxcm, g8GH]. We also appreciate [rxcm] for finding that the “paper is clearly written and well-motivated.” In what follows, we address the concerns of the reviewers. --- > **Weakness 1.** “It is undeniable that this work is closely related to the Fourier Occupancy Field [9]. The authors should consider adding comparative experiments with this work.” > **Reply:** Fourier Occupancy Field (FOF) [9] uses a Fourier series along one of the spatial axes (i.e., the z-axis) to enable efficient 3D human reconstruction, while our method uses a Fourier series along the temporal axis (1) to regularize high-frequency (e.g., jittery) temporal shape change and (2) to enable efficient 4D hand reconstruction. Along with this difference, we believe that another important difference between FOF and our method lies in the characteristics of the shape category that each aims to reconstruct. FOF aims to reconstruct clothed humans, which have complex shape variations (e.g., cloth wrinkles) but with fewer self-occlusions caused by the underlying shape articulations. In contrast, our method aims to reconstruct human hands, which have more complex articulations leading to more severe self-occlusions. Thus, most of the existing hand implicit functions, e.g., [6, 16, 18], use articulated implicit representation to directly model pose-dependent deformations with respect to the learned canonical hand shape to incorporate a strong pose prior. Similarly, one of our main contributions was also to propose articulation-aware query flows (i.e., pose and shape flows) to directly model pose-dependent 4D hand deformations. Since FOF is a non-articulated implicit function, we thought that the existing state-of-the-art methods specifically designed for the same hand reconstruction task would be more challenging baselines.
In the table below, we show the experimental comparisons with FOF, where FOF yields similar results to our non-articulated implicit function baseline (Occupancy Flow [29]) in Table 1 in the paper. We will add this discussion and the experimental results in the revision. | | Shape accuracy (IoU $\uparrow$) | Shape accuracy (CD $\downarrow$) | | --- | --- | --- | | FOF [9] | 30.4 | 23.88 | | Ours | 62.8 | 4.46 | >**Weakness 2.** “A more comprehensive overview of this aspect should ideally be included in the first paragraph of the methodology section for better clarity.” > > **Weakness 3.** “It would be helpful to provide clearer illustrations that show the trajectories generated by each branch separately, as well as the combined trajectory. This would enable readers to gain a clearer understanding of the method.” > **Reply:** Thank you for your suggestion. We will make sure to (1) add the overview of our overall pipeline and (2) show the trajectories generated by each type of flow in the revision. >**Question 1.** “In the prediction of pose flow, I think that the essence of being able to perform articulated shape modeling is to represent the skeleton of the hand in the form of a graph, which can maintain the topological structure of the hand. [...] But the subsequent prediction of Fourier coefficients is based on the trajectory of each joint, so how is the temporal smoothness of each joint constrained here?” > **Reply:** The temporal smoothness of each joint is preserved from our trajectory prediction in the *frequency-domain subspace* (lines 49-50). For maintaining the topological structure in our temporal reconstruction, we feed the joint features extracted using a graph convolutional network (GCN) as inputs to our Fourier coefficients estimation module (lines 202-203). 
We observe that such structure-aware input feature conditioning is sufficient to be able to preserve the hand topological structure in our temporal reconstruction as shown in the supplementary video. >**Question 2.** “For different input sequences, is it necessary to normalize them to the same sampling space? If so, what are the specific implementation details?” > **Reply:** Along the spatial dimension, we normalized the 3D coordinate space of each input sequence by aligning the predicted hand root joint of the first frame to the origin point. Along the temporal dimension, we did not apply normalization. We will clarify this in the revision. >**Question 3.** “[...] the estimation of the shape flow after arbitrary points heavily will rely on the accuracy of LBS. Does the paper have any special handling or explanation for this situation?” > **Reply:** The accuracy of the estimated shapes after LBS (S.3.1 in the supplementary paper) is dependent on two factors: (1) the accuracy of the pre-trained skinning weights $\mathbf{w}^{\mathbf{p}}$ and (2) the accuracy of our joint flow estimation, which determines the rigid bone transformations {$\mathbf{T}_b$} for $b=1, \textit{...}, B$. To examine factor (1), we calculated the accuracy of the pre-trained $\mathbf{w}^{\mathbf{p}}$ by computing the shape IoU after applying LBS using the ground truth rigid bone transformations. As the resulting shape IoU was quite high (98.62% on InterHand2.6M [26] test dataset), our shape accuracy after LBS would mainly depend on the remaining factor (2), which is the accuracy of our joint flow estimation. Thus, we performed experiments to examine the robustness of our final shape accuracy with varying quality of the initial joint estimation and showed that our method is quite robust to it. We kindly refer you to our answer to Weakness 3 of the reviewer [Uqr4] for the experimental results. 
>**Question 4.** “Recommendation: For the ODE-based method, although its computational complexity is relatively high, it can maintain the property of diffeomorphism. This aspect can be considered later.” > **Reply:** Thank you very much for your valuable comment. We will consider this aspect in our future work.
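For readers unfamiliar with the LBS step discussed in Question 3 above, the following minimal sketch illustrates standard linear blend skinning. This is our own toy illustration, not the authors' code: in the paper the weights and transforms would come from the pre-trained skinning weights $\mathbf{w}^{\mathbf{p}}$ and the rigid bone transformations {$\mathbf{T}_b$} derived from the estimated joint flow.

```python
import numpy as np

def lbs(points, weights, transforms):
    """Linear blend skinning: deform query points by the per-point weighted
    sum of per-bone rigid transforms (4x4 homogeneous matrices).
    points: (P, 3), weights: (P, B), transforms: (B, 4, 4)."""
    homog = np.concatenate([points, np.ones((len(points), 1))], axis=1)  # (P, 4)
    blended = np.einsum("pb,bij->pij", weights, transforms)              # (P, 4, 4)
    out = np.einsum("pij,pj->pi", blended, homog)
    return out[:, :3]

identity = np.eye(4)                      # bone 1: stays in place
shift = np.eye(4)
shift[:3, 3] = [1.0, 0.0, 0.0]            # bone 2: translates by +1 along x
pts = np.array([[0.0, 0.0, 0.0]])
w = np.array([[0.5, 0.5]])                # equal skinning weights for both bones

# The point moves halfway along x: [[0.5, 0., 0.]]
print(lbs(pts, w, np.stack([identity, shift])))
```

This also makes the rebuttal's two-factor argument tangible: the output depends only on the skinning weights (factor 1) and the bone transforms (factor 2).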
Inference-Time Intervention: Eliciting Truthful Answers from a Language Model
Accept (spotlight)
Summary: ## Summary The authors have introduced a new way to ensure LM generations provide truthful answers. They modify activations using a set of learned directions across top-K attention heads. The new method, "Inference-Time Intervention," entails identifying a few attention heads with high classifier accuracy in linear probing. During the inference procedure, the technique changes the activations in directions related to the truth. It is a form of activation editing. This paper focuses on improving the accuracy of LLMs - specifically, addressing cases where the model knows the correct answer but produces an incorrect one. The authors use the difference between generation and probe accuracy to operationalize what it means for the network to "know." Saunders et al. 2021 have also emphasized this point in their previous work. ### Dataset & Evaluation The dataset used in the paper is the TruthfulQA dataset, whose questions are designed to elicit incorrect answers from the model. The dataset has two types of answers: "truthful" (meaning not false) and "informative" (meaning answering the question). The combined "True*Info" metric measures the percentage of answers that are both truthful and informative and is used to evaluate the model's accuracy. Strengths: - The paper is well-motivated and addresses a significant problem in alignment. The technique presented is simple, non-invasive, and data-efficient. Weaknesses: - See Questions below. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - How did you decide on using 40 samples to construct the probing dataset? The train-to-validation ratio was 4:1. Were 32 examples used for training the classifier model and 8 for validation? It seems too low to qualify for a probe. Or was the probe classifier trained on 40 x n examples where n is the size of the TruthfulQA dataset? The latter approach seems more reasonable technically.
However, this joint training for all samples ignores the difference in discovered heads for different truthfulness types, e.g., logical falsehoods, conspiracies, common points of confusion, etc. The head representing logical falsehood might differ significantly from the head representing conspiracy. - Based on the results, it seems the Mass Mean Shift method for identifying the truthfulness direction gives the best results. Are there any similarities between the directions identified for different layers and heads? - I am worried about using CE and KL to measure the model's divergence from the original. It suggests that this method ensures accurate results without deviating too much from the initial distribution. However, observing a decrease in CE and KL values is insufficient. We must also evaluate how this approach impacts the model's performance on other tasks. In particular, we need to analyze this intervention's adverse effects on the model's performance on downstream tasks. - My previous point is of more significant concern because the evaluation was done automatically using GPT-judge instead of by humans. - I need help understanding Figure 2(b). Based on its construction, it functions similarly to PCA. However, I need clarification as to why the major axes of the ellipses are all oriented in the same direction. I had anticipated them being perpendicular, like in PCA. - How is the difference between generation and probe accuracy used in the evaluation? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: The authors have adequately addressed the limitations and risks of their work.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
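For concreteness, the intervention summarised in the review above — shift the activations of the top-K heads (ranked by probe accuracy) along a learned truthful direction at each generated token — can be sketched in a few lines of numpy. All shapes, directions, and probe accuracies below are synthetic stand-ins, not the paper's actual values:

```python
import numpy as np

rng = np.random.default_rng(0)

n_heads, d_head = 8, 4                              # toy model: 8 heads of dim 4
head_acts = rng.normal(size=(n_heads, d_head))      # activations for one token

# hypothetical per-head truthful directions (unit norm) and probe accuracies
directions = rng.normal(size=(n_heads, d_head))
directions /= np.linalg.norm(directions, axis=1, keepdims=True)
probe_acc = rng.uniform(0.5, 0.9, size=n_heads)
sigma = np.full(n_heads, 0.1)                       # stand-in activation stds

def iti_shift(acts, directions, probe_acc, sigma, alpha=15.0, top_k=3):
    """Shift only the top_k heads (by probe accuracy) along their
    truthful direction, scaled by alpha times the activation std sigma."""
    shifted = acts.copy()
    top = np.argsort(probe_acc)[-top_k:]            # heads with best probes
    for h in top:
        shifted[h] = shifted[h] + alpha * sigma[h] * directions[h]
    return shifted

shifted = iti_shift(head_acts, directions, probe_acc, sigma)
```

Only the `top_k` rows of `head_acts` are modified; all other heads pass through unchanged, which is what makes the intervention minimally invasive.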
Rebuttal 1: Rebuttal: We thank the reviewer for their positive and helpful feedback. ***Addressing questions*** **Only 40 samples for probing?** Questions in TruthfulQA have an average of $7.2$ answers. Therefore, the $40$ samples provide roughly $288$ QA pairs to train and evaluate the probe. As shown in Figure 6(A), the performance improvement brought by using more QA pairs plateaus early. Figuring out the directions of different subcategories, how they differ, and how they could compose are interesting research questions! We leave this to future research. **Similarities of truthful directions across attention heads** Directions from different attention heads live in different output spaces that are related to each other only through complex nonlinear computations. It is therefore difficult to compare the similarity of vectors across these spaces. **Evaluation on other benchmarks** The KL divergence results suggest that the model deviates little from the original language distribution. The pretraining loss (CE) suggests the model is still good for its original purpose, predicting the next token. The conventional wisdom is that CE is well-correlated with downstream task performance [1, 2]. Nevertheless, what the reviewer raised is an important and solid concern that all model-editing publications should address. The positive results in our generalization experiment onto TriviaQA and Natural Questions provide initial evidence that our method doesn't degrade the model's capability, but extensive experiments like MMLU would be more convincing. [1] Liu, Hong, et al. "Same pre-training loss, better downstream: Implicit bias matters for language models." International Conference on Machine Learning. PMLR, 2023. [2] http://mitchgordon.me/ml/2020/06/30/pretraining-linearity.html **Using GPT as judge** If I understand correctly, the reviewer is worried by the possibility that the intervention actually takes advantage of a spurious correlation of GPT-judge, so the True*Info scores should not be trusted. 
Please correct me if I'm wrong! To rule this out, the multi-choice accuracy compares the conditional probabilities of several candidate answers and is thus independent of GPT-judge and free of such worries. Moreover, we printed out all post- and pre-intervention QA pairs to be examined in Appendix A. We manually checked a subset of these and found that the truthfulness is improved. **Explain Figure 2(B)** The setup is similar to PCA but relies on a different way to select the base directions. The first truthful direction is normal to the hyperplane that best separates truthful and false activations; the second truthful direction is found with the same process but in a subspace orthogonal to the first truthful direction. With this setup, it is possible for the major axes of the ellipses to be parallel. **Difference between generation and probe accuracy** Before the intervention, the gap between generation truthfulness and the probe on the best attention head is $83.3\%-31.1\%=52.2\%$ and averages to around $40\%$; after the intervention, the largest gap is $83.3\%-49.1\%=34.2\%$, and the average gap is $22\%$. --- Rebuttal Comment 1.1: Title: Response Comment: I acknowledge the rebuttal by the authors. I have raised my score from 6 to 7.
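The two-step direction finding described in the Figure 2(B) explanation — fit a linear probe for the first truthful direction, then refit in the subspace orthogonal to it — can be sketched as follows. The data and the hand-rolled logistic probe here are synthetic stand-ins, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 16
X = rng.normal(size=(400, d))                       # fake head activations
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)     # fake truthful labels

def fit_probe(X, y, steps=300, lr=0.5):
    """Zero-initialised logistic probe trained by gradient descent.
    With zero init, the weights stay in the row space of X."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))            # sigmoid predictions
        w -= lr * X.T @ (p - y) / len(y)            # logistic-loss gradient
    return w / np.linalg.norm(w)

w1 = fit_probe(X, y)                                # first truthful direction
X_perp = X - np.outer(X @ w1, w1)                   # project out w1
w2 = fit_probe(X_perp, y)                           # second direction, ⟂ w1
```

Because the second probe is trained on data with `w1` projected out (and starts from zero), its weight vector is orthogonal to `w1` by construction — which is why the two "truthful directions" need not be PCA-like principal axes.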
Summary: The authors study how to steer the text generation from different LLMs to be more truthful. They do so by finding directions in feature space (for each attention head) that correspond to truthfulness, and intervening at inference time by adding these directions to the activations of the relevant attention heads. They thoroughly characterise the method. (I gave the score of 7, but I was torn between 7 and 8) EDIT: Updated to 8 after rebuttals Strengths: This is a great paper, reading it sparked joy. It's very, very well written; clear and easy to follow. The research question is important and interesting. The experiments are quite extensive, rigorous, and seem to be reported even-handedly (i.e. I don't get the impression that the authors try to display their method in a favourable light; see e.g. the generalisation results). Weaknesses: Major concerns (I only have one): - 1. Comparison to related work and novelty: Lots of relevant work is cited, and, AFAICT, the main relevant papers are mentioned. But the authors should do a much better job at explaining how exactly their method is different from key related work, like Subramani, Hernandez, and Burns. In the authors' defense, some of the relevant work was published shortly before the NeurIPS deadline and would be considered concurrent. (I somewhat know the literature and think that there is sufficient novelty in this paper, both from a methods point of view and from the application of an activation-editing method to the field of truthfulness; BUT, it should be explained better) Medium-sized concerns: - 2. Why did you not try to fit one single probe to all activations (maybe with high regularisation to get a sparse result)? Or, at least, choose which heads you accept with an algorithm that takes into account feature redundancy? This should probably be an ablation, especially as you claim somewhere that a key difference to previous work is that you only modify some activations. 
So you should study whether this actually makes a difference. - 3. I think that for a more comprehensive evaluation of generalisation, it would be good to include a multiple-choice dataset where you don't have to generate the false answers yourself first (e.g. like MMLU) - 4. I feel like the abstract is a bit too enthusiastic, in that it only cites the most impressive results. I appreciate that that's probably a matter of taste, though. - 5. Several figures and tables: I think I know what the difference between "True" and "MC acc" is, but it's not very well explained. Also, is True evaluated by GPT-judge or humans? - 6. In Table 1, I would be interested in the "linear probing" baseline (which you can, of course, only use for multiple-choice). In the intro, you say "we observe a large 40% difference between probe accuracy and generation accuracy" - 7. There are known issues with GPT-judge, e.g. that it rates answers that express more uncertainty (even in unhelpful ways) as more truthful. More human evaluation would be better. Some small suggestions: * Figure 1: wait, did scholars in the middle ages not think the world was flat? I didn't know that. Maybe take an example that is easier to understand, like "breaking a mirror brings 7 years of bad luck" * Section 3.1: Q is often shorthand for the query matrix, so I'd recommend using a different variable. * For Figure 2B: I had looked at this figure before reading the text and interpreted the two directions as the coefficients of the probe with the highest magnitude. Clarify in the caption, or refer to the text. * Section 3.1: You should more clearly explain that TruthfulQA is a dataset that consists of questions that many humans commonly answer incorrectly. * You should explain how you sample from Llama (temp, topk, ...) 
* you write "Intervening in this direction is equivalent to doing a gradient descent on the head activation to maximize its probability of being predicted as truthful" -> this confused me, as it's only equivalent to doing ONE gradient step. * Figure 4: It would be nice to also see the baseline (alpha = 0 and/or K = 0). * Figure 4: I'd explain in the figure caption or figure that the second column measures truthfulness and the third column measures how invasive the intervention is. * I don't understand the following sentence: "We combine the answers from two hold-out sets for evaluation so no test samples are used in direction finding." * Section 4.1: You should explain that the metric true*informative only applies to the generation part. This is obvious once you think about it, but it would have helped me understand the paper faster. * Table 3: Say that this is on Llama (without IFT) again. * line 275: text fragment Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: none Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: yes, limitations are highlighted well Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their positive and helpful feedback. We will edit accordingly to improve the reading experience of the paper. ***Addressing major concerns*** **More detailed related work** CCS (Burns et al. 2022) is already introduced in the Introduction, the Related Work, and the experiment section, but we will add a more detailed introduction to REMEDI and Subramani et al. in the Related Work. REMEDI (Hernandez et al.) learns a parameterized function to intervene on the representation space of the language model to edit the knowledge it has, e.g., "Anita is an attorney". Subramani et al. discover that, by manipulating the representation space of GPT-2, we can force it to generate a specific sentence, and the average of such "steering vectors" can achieve sentiment transfer on the Yelp Sentiment dataset. ***Medium-sized concerns*** **A single sparse probe** We didn't train one huge probe because we think of selecting attention heads as a natural way of sparsifying the intervention. But a probe with strong regularization could serve as an interesting baseline. To compare with intervening everywhere, we controlled the number of heads intervened on (K) in Figure 4. If we extrapolate the trend to $K=1024$ (all heads), it would be an extremely strong intervention, where the resulting low helpfulness might be undesirable. **Linear probe baseline** In line 153, we mentioned the probe validation set accuracy is $83.3\%$ for linear probes with a 4:1 split on multi-choice QA pairs. It's a good idea to mention the highest probe accuracy trained with $5\%$ of TruthfulQA in Table 1. **Human evaluation** Human evaluation is the ultimate evaluation of truthfulness and helpfulness, but it's also noisy and suffers from a lack of reproducibility. Different authors tried to annotate generated answers, but we found people disagree on what "truthfulness" and "helpfulness" mean for each question. 
Therefore, we kept only reproducible scores and printed out all QA pairs in Appendix A. --- Rebuttal Comment 1.1: Title: Response Comment: Thanks. I think this response mostly fails to address my concerns, so I won't raise my score (but hey, it's a 7 already!). Re related work, I already acknowledged that you cite most related work, but you fail to explain how exactly your work is different. This is still the case in your response here. Re a single probe: intervening on many heads (each individually fitted) is not the same as fitting one probe to all heads simultaneously, or using any other method that accounts for correlation between the features. And I still think another MC dataset would help the paper. --- Reply to Comment 1.1.1: Title: Response Comment: Re related work, here I spell out the difference in detail. (1) To CCS, the biggest delta is the intervention proposed, whereas CCS focuses on understanding the direction of truth in hidden spaces; (2) To REMEDI, the goal is different: we hope to improve overall truthfulness, while REMEDI aims at editing specific knowledge like "Anita is an attorney"; (3) To Subramani et al., the task is different: they focus on forcing the generation of one specific sentence and on style transfer, while we focus on controlled open-ended generation. Re the large linear probe, I guess what you meant by correlation is that features from different heads and layers will compete against each other via a strong regularization term. If we force the 0-norm of the big probe weight to be $128*48=6144$, it will change exactly the same number of activations as in ITI but at different locations. That's indeed a good alternative method to try. Hope this answers your questions. We appreciate your insightful review! We will be available during the discussion period if any further questions arise.
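The L0-constrained global probe discussed in the reply above — keeping only the $128*48=6144$ largest-magnitude weights of one big probe over all head activations, zeroing the rest — amounts to hard thresholding. A hypothetical numpy sketch with random stand-in weights (the layer/head counts are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
n_layers, n_heads, d_head = 32, 32, 128                  # LLaMA-7B-like shapes
w_big = rng.normal(size=n_layers * n_heads * d_head)     # one global probe weight

def hard_threshold(w, k):
    """Keep the k largest-magnitude entries of w, zero the rest."""
    out = np.zeros_like(w)
    keep = np.argsort(np.abs(w))[-k:]
    out[keep] = w[keep]
    return out

k = 128 * 48                                             # 6144, as in the reply
w_sparse = hard_threshold(w_big, k)
```

Unlike ITI's per-head selection, the surviving coordinates here can be scattered across heads, so the same number of activations is touched but at different locations.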
Summary: This paper proposed Inference-Time Intervention (ITI), which can be used to enhance the truthfulness of LLMs. The method first uses supervised learning to identify latent vectors for factual outputs and then shifts activations along these vectors. The same intervention is repeated autoregressively during generation. The proposed method is computationally inexpensive. The experiments on the TruthfulQA benchmark demonstrate that ITI can boost model performance to a large extent. The authors also verify the generalizability of ITI on other benchmarks and show better performance than baseline methods. Strengths: The paper proposes a way to improve the truthfulness of LLMs, and the method is shown to be effective and data-efficient. The analysis in the paper is comprehensive and supports the claim: starting from understanding whether a network knows the true answer to proposing ITI to improve the truthfulness. Each step is well explained and transitions smoothly into the next. Weaknesses: NA Technical Quality: 3 good Clarity: 3 good Questions for Authors: - How does ITI compare to weight editing, since both aim for minimal invasion? The related work section says the latter could reduce general robustness, but I am not sure whether that will happen in this application. - In Fig. 2B, it is shown that there could be more than one truthful direction. In ITI, do you consider shifting along different directions, or only along the first main direction? Will adding more directions help the models more? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their positive and helpful feedback. For the two raised questions: **Comparison between ITI and weight editing** ITI is an activation editing method that does not change model weights. It enjoys the benefit that the intervention strength can be tuned by a hyperparameter $\alpha$, which gives practitioners more flexibility in deploying this method. Qualitatively, Appendix A shows that the model remains fluent post-intervention; quantitatively, the shift from the pre-intervention language distribution is measured by the KL divergence (KL) and pretraining loss (CE). In fact, ITI with a selected $\alpha$ is equivalent to doing sparse editing to the bias term of the output projection MLP. I successfully edited a LLaMA model offline within the HuggingFace framework, which I can release after de-anonymization. I talked with it, and it still speaks fluent, sense-making English. We welcome everyone to chat with it. **Alternative Truthful Direction** We tried alternative directions, like the direction pointing from the mean of false activations to the mean of truthful activations (mass mean shift). Table 3 shows that the “mass mean shift” outperforms the first probe direction. --- Rebuttal Comment 1.1: Comment: I have read the authors' rebuttal and keep my original score.
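The equivalence claimed in the rebuttal above follows from linearity of the output projection: adding a shift $\delta = \alpha\sigma\theta$ to a head's activation before projecting by $W_o$ is the same as adding $W_o\delta$ to the projection's bias once and for all. A quick numerical check (all shapes hypothetical):

```python
import numpy as np

rng = np.random.default_rng(3)
d_head, d_model = 4, 8
W_o = rng.normal(size=(d_model, d_head))   # output projection of one head
b = rng.normal(size=d_model)               # its bias term
x = rng.normal(size=d_head)                # head activation for some token
delta = rng.normal(size=d_head)            # stands in for alpha * sigma * theta

activation_edit = W_o @ (x + delta) + b            # ITI: shift the activation
bias_edit = W_o @ x + (b + W_o @ delta)            # equivalent: edit the bias
```

This is why the intervention can be "baked into" an offline model edit, as the authors describe, while leaving all weight matrices untouched.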
Summary: This paper studies the truthfulness of LLMs through the internal representations of models, which is an important and challenging research direction. As previous works have demonstrated that LLMs can contain truthful information internally despite giving an incorrect output, this paper proposes the Inference-Time Intervention (ITI) technique that shifts the activations of several selected attention heads when generating tokens. The authors run experiments mainly on the TruthfulQA dataset to demonstrate the effectiveness of ITI, and analyze the generalization ability on two other datasets. Comparisons with other techniques, including few-shot prompting, finetuning, and CCS, have been studied. Strengths: Overall, this paper is studying an important problem with contemporary LLMs. Several strengths I particularly admire are: - The methodology of this paper is appealing. Rather than merely finding an internal representation like CCS, this paper goes further and keeps the token generation process unchanged. This allows the ITI technique to be integrated with existing standard pipelines, and makes it less invasive. - The intuition of ITI is straightforward. Linearly shifting the attention output activations is intuitive, and the fact that the computation cost is essentially unaffected is appealing. Weaknesses: Notwithstanding its strengths, this paper presents several weaknesses that prevent its current acceptance: - The first contribution of the paper, the exploration of misalignment between the outputs and the internal information, has been previously addressed in studies such as the CCS paper. The analysis in Section 3, while valuable, does not offer substantial novel insights. These points should not be viewed as the principal contributions of the paper and should be rephrased accordingly. - A major limitation of the paper is its narrow focus on a limited number of datasets, which doesn't convincingly support the effectiveness of ITI. 
Most experiments are conducted on the TruthfulQA dataset, with a minimal generalization to two additional datasets. This leaves it uncertain whether ITI is universally applicable or specific to TruthfulQA. More tests on a variety of datasets would strengthen the paper. - When focusing solely on the TruthfulQA dataset, ITI's improvement appears marginal. For instance, ITI improves the True*Info accuracy by only 2% on top of few-shot prompting, while few-shot prompting itself provides a more significant 13% improvement. These findings suggest that the benefits offered by ITI could largely be achieved through few-shot prompting alone. Therefore, the paper's claim that ITI is an important technique to consider needs more compelling support. - Finally, the method for determining the hyper-parameter $\alpha$ remains ambiguous. The crucial question is whether $\alpha$ is universally applicable or requires case-by-case optimization. Considering the slight improvement provided by ITI, it is clear that selecting an appropriate $\alpha$ is vital, as also suggested by Fig 4. However, the paper fails to explain how this selection should be done. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: The following questions would not change my rating, but I would highly appreciate it if the authors could fix them: - In line 46, the authors mention the probe accuracy of the model. However, it seems to me that this is not a property of the LLMs, but is highly dependent on the classifier you use. I'd recommend the authors not introduce this as a "concept" (since it only appears once, in line 187), but just describe it. - The values of CE and KL are uninformative. From the table, I cannot tell whether a CE value is high or not, and I am not sure whether they are important. I recommend the authors clarify the meaning of these values and explicitly discuss what we can learn from their variation. - It seems that the way this paper uses CCS is inconsistent with the original CCS paper. 
First, I don't think using only *one* sample is enough to find a good direction. Second, it is not designed to shift the activation of attention heads, but rather the output of each *layer*. I am not sure whether it is appropriate to use CCS in this way. - Line 275, typo. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: Not applicable. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
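Regarding the reviewer's concern about how informative the CE and KL values are: the KL number quantifies how far the post-intervention next-token distribution drifts from the pre-intervention one, averaged over positions. A minimal numpy sketch of that computation, using random stand-in logits rather than real model outputs:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)    # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl(p, q, eps=1e-12):
    """KL(p || q) in nats, computed per position over the vocab axis."""
    return np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)

rng = np.random.default_rng(4)
logits_pre = rng.normal(size=(5, 100))                        # 5 positions, vocab 100
logits_post = logits_pre + 0.1 * rng.normal(size=(5, 100))    # small intervention

p = softmax(logits_pre)
q = softmax(logits_post)
divergence = kl(p, q).mean()    # the kind of scalar reported in the tables
```

A small mean KL says the intervention barely perturbs the language distribution, which is the authors' point; the reviewer's point is that this alone does not certify downstream-task performance.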
Rebuttal 1: Rebuttal: We thank the reviewer for their useful and constructive feedback. We have clarified several elements of the paper below and will incorporate them in our updated version. ***Addressing Weakness*** **Figure 2 not a contribution** Though (orthogonal) probe techniques have been discussed in the literature, applying them to LLaMA and TruthfulQA hasn't been done before. Thus, we need to introduce this procedure to lay the groundwork for introducing ITI. We will edit the prose and be precise in claiming contributions. **Only on the TruthfulQA dataset** Thanks for bringing up this concern. Testing in additional settings would certainly strengthen the result. However, it's worth noting that the experiments in the paper do cover a variety of contexts, for two reasons: + TruthfulQA is a diversified dataset covering 38 subcategories, ranging from law and stereotypes to conspiracy theories. We did not treat subcategories differently but, as Figure 5 shows, ITI improves most of them. This is nontrivial in itself. + The zero-shot generalization experiment on TriviaQA and Natural Questions is the most rigorous test for ITI, since we apply the intervention learned on TruthfulQA to new datasets without any modification. It's a true zero-shot setting, and even a small improvement in such a setting is positive. **Marginal improvements** We hope to propose a technique that could be added onto real chatbot products like ChatGPT or Claude. Few-shot prompting takes up precious context length and significantly shifts the original language distribution. Our focus in Table 1 through Table 3 is the delta between methods with and without ITI, e.g., from $30.5\%$ to $42.3\%$ for the LLaMA baseline. **Selecting $\alpha$** In the purely research-oriented case of this paper, we use a train, validation, and test split and choose $\alpha$ to optimize the True*Info metric on the validation set. 
Generalization experiments on TriviaQA and Natural Questions show that a good $\alpha$ tuned on TruthfulQA has the potential to work as well on other domains. That is to say, $\alpha$ is more dependent on the specific model rather than requiring careful tuning for specific datasets. However, the real-life dilemma is that we are unsure what practitioners are optimizing for. The $\alpha$ should be selected per need by the user via trial and error: if users are extremely cautious about untruthful replies, $\alpha$ should be tuned up; otherwise, it should be tuned down if helpfulness is also a requirement. We will clarify this point in the paper. ***Addressing Questions*** **Writing around line 46** This is a valid point. We will change it into "We look into this through the difference between..." **How to interpret CE and KL?** We will better contextualize the values, e.g., by stating how many parameters or how much training data a given increase is equivalent to losing. **CCS usage** In Table 3, all directions, e.g., CCS and mass mean shift, are found using half of TruthfulQA. What do you mean by "one sample"? Thanks for pointing out the confusion with the CCS baseline. The issue here is that ITI focuses on attention heads, while CCS focuses on MLP activations. As a result, we compare with a CCS analog for attention heads. We will clarify this. --- Rebuttal Comment 1.1: Comment: Thanks for the response. I appreciate the supplementary experiment. It addresses my major concern about whether ITI can be trained in advance and generalize to various settings. I will raise my score to 6. > "We will edit the prose and be precise in claiming contributions." Can you please explicitly let me know what you will claim? --- I believe your work will benefit from the following improvements: - Some theoretical explanation, or at least some intuition, on why ITI can improve truthfulness. 
The reason why I previously requested more experiments is that we do not know the *boundary* of ITI, i.e., under what setting its effectiveness vanishes. To systematically address this, some theoretical analysis or ablation study would be helpful for understanding the value and limitations of ITI. - Better analysis of the incorporation with existing techniques. It has been shown on TruthfulQA that when equipped with FSL, the improvement of ITI becomes marginal (but still positive, I agree). Is this true on MMLU / TriviaQA as well? More experiments and analysis on this would be insightful. --- Rebuttal Comment 1.2: Title: Response from authors Comment: Thank you for acknowledging the new generalization results. **Phrasing Contribution** I changed the paragraph in line 63 into: > This work makes two main contributions. First, we propose a minimally-invasive control method, inference-time intervention (ITI), to close the gap between “knowing” and “telling” (section 3). ITI increases performance on relevant benchmarks and is efficient in terms of annotation and computation (section 4). Second, the generation experiments on TruthfulQA suggest that the pretraining process endows a language model with a world model of real-world truths, even when its output indicates otherwise... Note that the second contribution is not about Figure 2 but rather a scientific understanding drawn from the results of the aforementioned engineering techniques (first contribution). I also changed the paragraph in line 135 into: > Following works that find interpretable directions within activation spaces of neural networks, we investigate whether there are vectors in the activation space of transformer layers that correspond to ``truthfulness'' by applying existing techniques: probing and orthogonal probing. **New Suggestions** We gratefully agree that these are important improvements that could be made. 
I could guess the _boundary_ of ITI is the accuracy of the internal world model endowed by pretraining, which is unknown and could only be approached via ever-advancing inference-time techniques. I will also look into the relationship and comparison of ITI and existing techniques to better establish it as a mature ML technique.
NeurIPS_2023_submissions_huggingface
2023
Learning to Modulate pre-trained Models in RL
Accept (poster)
Summary: Large-scale pretraining on a diverse dataset followed by finetuning on smaller datasets from downstream tasks has been wildly successful in domains such as computer vision and NLP. The closest analogue to this paradigm in the context of RL is arguably multi-task pretraining followed by finetuning on one or more unseen tasks. This paper investigates the efficacy of finetuning approaches popularized by supervised learning (CV/NLP) on RL problems cast as sequence modeling with Decision Transformers (DT). The authors construct a pretraining dataset that consists of 50 state-based tasks from Meta-World and DMControl (40 and 10 tasks, respectively), and evaluate finetuning methods on held-out tasks from each domain (10 and 6 tasks, respectively) in single-task finetuning, multi-task finetuning, and continual learning settings. The authors find that their proposed finetuning method, L2M, which combines L2P and LoRA, consistently obtains good performance on unseen tasks after finetuning, while also retaining good performance on the pretraining tasks. Strengths: The problem is interesting, the paper is well written and easy to follow, the method is well motivated, and the experiments appear sound. There is sufficient discussion of related work. While many existing papers have considered multi-task pretraining and finetuning in RL, I appreciate that the authors take the time to thoroughly investigate trade-offs between different finetuning methods. Further, new finetuning strategies such as LoRA have become very popular in NLP, and this paper confirms that it (along with other modifications necessary to make it work for multi-task DTs, as proposed by the authors) can also work well for DTs. Weaknesses: - **Lack of clarity on experimental setup.** When going through the paper, I found it difficult to fully grasp what the experimental setup looks like and its potential assumptions / pitfalls / failure modes without repeatedly checking the appendix and/or reading between the lines. 
For example, it is not stated explicitly that finetuning is done strictly on offline replay data, which is also collected by single-task SAC agents as in the pretraining dataset. I had to find this information in Appendix D. Likewise, it is not stated explicitly which tasks are included in the pretraining and finetuning datasets; I had to find this information in Appendix A. For the former, it is not really a problem that finetuning requires an offline dataset for the target task, but not making it clear is misleading. For the latter, the authors do mention that the DMControl tasks include multiple embodiments but do not provide further details in the main paper. I find this problematic, since different splits and/or sets of tasks would lead to very different finetuning performances (e.g. task difficulty and degree of overlap). - **Lack of discussion on limitations.** Continuing along the lines of the above, I also find that the paper generally lacks discussion of limitations. Given that the work is very data-driven and domain gaps generally are larger in RL than in NLP, it is important to clearly state assumptions / pitfalls / failure modes related to data collection and experimental setup. Ideally, these limitations (or properties, if you will) would be backed by data that shows, e.g., that finetuning is highly dependent on task similarity. The authors list this as future work, but adding such an experiment would make the current submission more complete. For example, the authors could pretrain on Meta-World and finetune on DMControl, and vice versa, and compare to the performance when including same-domain tasks in the pretraining dataset. Technical Quality: 3 good Clarity: 3 good Questions for Authors: I would like the authors to address the comments I listed in "weaknesses". Additionally, two clarification questions: - It is stated in Appendix A that the task *reacher-hard* is included in both the pretraining and finetuning datasets. 
I assume that this is written in error, but would like the authors to please list the correct task splits. - The authors also state in Appendix A that the action spaces considered in DMControl range from 1 (cartpole) to 21 (humanoid) dimensions. However, I do not see any experiments on humanoid in the paper. Can the authors please clarify if they consider humanoid tasks or not? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: I would like to see more discussion on limitations. See my previous comments for constructive feedback. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
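For readers unfamiliar with LoRA, which the reviewed method L2M builds on: LoRA freezes the pretrained weight W and adds a trainable low-rank update (alpha/r)·BA, with B zero-initialised so finetuning starts exactly at the pretrained model. A minimal numpy sketch (all shapes and the scaling value are hypothetical, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(5)
d_in, d_out, r = 16, 16, 4                 # low-rank bottleneck r << d

W = rng.normal(size=(d_out, d_in))         # frozen pretrained weight
A = 0.01 * rng.normal(size=(r, d_in))      # trainable down-projection
B = np.zeros((d_out, r))                   # trainable up-projection, zero-init
alpha = 8.0                                # LoRA scaling factor

def lora_forward(x):
    """y = W x + (alpha / r) * B A x; only A and B would be trained."""
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d_in)
y0 = lora_forward(x)                       # at init, identical to frozen model
```

Because the update BA has rank at most r, the adapter adds only r·(d_in + d_out) trainable parameters per layer, which is what makes finetuning on small downstream RL datasets tractable while the pretrained weights stay intact.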
Rebuttal 1: Rebuttal: Thank you for your positive assessment of our paper and your feedback! **Lack of clarity on the experimental setup:** Following your feedback, we revised our paper to improve clarity. In particular, we changed the following: * In Line 181 (Experiments), we now explicitly point out that the fine-tuning is done strictly on offline replay data, similar to our pre-training setup. We agree that this information should have been more prominent in the main text, rather than being mentioned in the Appendix only. * We added a Table to Appendix A, in which we explicitly list the state and action spaces for all pre-training and fine-tuning tasks considered in this work. While the Meta-World benchmark contains a single robot morphology (same state/action spaces across tasks), the morphologies in DMControl vary across tasks (different state/action spaces). In addition, we refer to this Table in Section 2 and give illustrative examples for Cheetah and Walker in the main text. * In case of acceptance, we will use the additional page to include further details in the main text that are currently relegated to the Appendix due to space constraints. **Lack of discussion on limitations:** * We agree with the reviewer that pre-training on one domain and fine-tuning on another is an interesting experiment. Therefore, we pre-trained a DT (with action discretization, unified state space and autoregressive action prediction) on Meta-World (MT40) only and then fine-tuned it to DMC6 and CW10. We find that the single-domain model (MT40) performs worse than our multi-domain model (MT40+DMC10), both on CW10 and on DMC6 (see Figure 3 in the attached pdf). These results indicate that multi-domain pre-training has a positive effect on the fine-tuning performance. We will add these two experiments, including a more detailed discussion of the setup/results/limitations, to our manuscript.
* Another limitation of multi-domain pre-training in RL is that it requires discretization and autoregressive action prediction to handle varying action spaces. Therefore, we pre-trained a non-discretized DT (trained via MSE loss) on MT40 only and then fine-tuned it on CW10. This results in considerably higher performance scores, but comes at the cost of a loss of generality. This single-domain model can only handle the particular state/action space it was trained on, and thus, fine-tuning it to tasks with new state/action spaces is not possible. We added a discussion of these limitations to our paper. Regarding your **questions**: * Thank you for spotting this, and sorry for the confusion. You are right, this was an error, and we have already corrected it. Reacher-hard is in the pre-training dataset, and reacher-easy is in the fine-tuning dataset. We selected our 6 fine-tuning tasks for DMControl in line with prior work (Hafner et al., 2019; Yarats et al., 2020). * As stated in lines 705-706, the action spaces in DMControl range from 1 (cartpole) to 21 (humanoid) across all tasks. However, in the 16 DMControl tasks we select (see Appendix A), the action spaces vary between 1 (cartpole, pendulum) and 6 (cheetah, walker). Humanoid is not among these 16 tasks. For clarification, we added a table in Appendix A that lists the action spaces (and original state spaces) for all environments we consider in our experiments. Thank you again for your actionable suggestions. We added the clarifications and experiments, including a discussion of the respective limitations, to our manuscript. If there are any further questions, we would be happy to discuss them! **References**: * Hafner, Danijar, et al. "Learning latent dynamics for planning from pixels." International conference on machine learning. PMLR, 2019. * Yarats, Denis, Ilya Kostrikov, and Rob Fergus. "Image augmentation is all you need: Regularizing deep reinforcement learning from pixels."
International conference on learning representations. 2020. --- Rebuttal Comment 1.1: Title: Thank you Comment: Thank you for the clarifications and additional experiments. I believe that this paper will be useful to the NeurIPS community, and incorporating all of the feedback you have received during this rebuttal (from fellow reviewers and me) into a future revision will further strengthen it. I have raised my score accordingly.
Summary: This paper studies fine-tuning and continual learning of pre-trained decision transformers in RL. Extensive experiments are conducted to analyze naive fine-tuning, parameter-efficient fine-tuning, and prompt tuning methods on both Meta-World and DMC domains. This paper presents a new method, L2M, which combines the well-established LoRA and L2P methods and demonstrates the superiority of L2M on Continual-World and DMC benchmarks. Strengths: 1. The proposed method is well-motivated and carefully designed to enable a general agent on multiple domains and tasks. 2. Extensive experiments and ablation study on modulating pre-trained DT 3. Strong performance on the continual learning benchmark Weaknesses: 1. The most significant weakness is the relatively poor presentation of the methodology and experiments, mainly due to the absence of many details. See questions below. 2. Although a unified state space is manually designed for MDDT in this paper, if I understand correctly, it is hard to extend it to new domains with a distinct state space, such as RLBench. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: On the methodology: 1. According to Eq. (2) and (3), it seems that each step of DT separately determines a distinct choice of the modulator. Does this mean that we have different weights in Eq. (1) for each token in the sequence? If so, can it break the advantage of training in parallel for transformers? 2. In Eq. (3), how do we update n(k)? What do we mean by selection count? Do we increment n(k) by one once we encounter a trajectory selecting k? 3. In Line 132, how are the learnable keys updated? I cannot find this additional term in the main text or appendix. 4. In Line 359, the authors state that future work includes combining and sharing modulators across tasks. However, if I understand correctly, we have already shared the modulation pool across tasks in the continual learning setting. On the experiments: 5.
In Line 208, the authors claim that they also experimented with PromptDT and VIMA, but I cannot find any experimental details or results. 6. In Section 3.2, where are the details about L2M-oracle? What kind of information (e.g., textual task specification?) is provided to L2M, and how is this information provided to the model? I cannot find implementation details either. 7. In Figure 6, it seems there is a line of straightforward baselines, i.e., separately training a new LoRA or adapter for each new task. Why are these baselines not evaluated? 8. In Figure 7, why do PEFT methods hurt performance on the pretrained tasks? If I understand correctly, we freeze all the parameters of the pre-trained model and only fine-tune the modulators. Minor suggestions on presentation: 9. Since L2M combines L2P and LoRA, the authors should present L2P in the Background section, provide intuitions on how L2P mitigates forgetting, and highlight the difference between L2P and L2M (one uses prompt tuning and one uses LoRA). 10. In Figure 4, the captions of the subfigures should be CW10 and DMC6, respectively. Overall, I appreciate the effort made by the authors to conduct extensive experiments and carefully design the method. I recommend the authors continue to improve this paper and make it impactful. If the authors address my questions properly and plan to revise their presentation, I will be happy to increase my rating. ------ Update: The authors have responded with detailed clarification in their rebuttal and most of my concerns are addressed. Thus I increase my rating from 4 to 5. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 4 excellent Limitations: This work has discussed its limitations, future work, and social impacts.
I recommend including weakness 2 mentioned above in the limitation part. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your excellent feedback; it helped us to considerably improve our paper! We conducted additional experiments (see attached PDF). In the revised manuscript, we incorporated all your feedback and suggestions. **Presentation of methodology:** 1. **Training in parallel:** At training time, the modulation matrices are selected for the entire sequence, not on a per-step basis. Thus, our method does not break the advantage of training Transformers in parallel. Thanks for pointing this out; we will highlight this in the methods section of the paper. 2. **Selection count n(k):** The selection count n(k) refers to the keys that map to the modulators in the modulation pool. This means that during training time, we maintain a count of how often a given key (and respective modulator) has been selected up until the current task. Once a key is selected, we increase its count n(k) by 1. To discourage L2M from always selecting the same modulators, we use the inverse of the selection count in Equation 3. 3. **Learnable keys:** For updating the learnable keys, we employ the same update strategy as L2P. This means we use a surrogate loss term to pull the selected keys closer to the corresponding query features by maximizing their cosine similarity. This loss term is added to the regular cross-entropy objective for action prediction used by the Decision Transformer. We now include the exact equation in the paper. 4. **Combining modulators:** You are correct, we are sharing the modulation pool across tasks in the continual learning setting. However, for a given input query, only a single set of modulation matrices is selected. What we refer to in Line 359 is that it may be possible to select multiple suitable modulators for a given input query and compose them accordingly. We understand that this distinction was not formulated clearly enough, and revised our formulation accordingly in our manuscript.
For example, we refined our wording in Line 361 by omitting the phrase “sharing modulators”, because, as you rightly pointed out, the modulation pool is already shared across tasks. **Presentation of experiments**: 5. **PromptDT and VIMA:** Yes, we did conduct experiments with PromptDT (Xu et al., 2022) and VIMA (Jiang et al., 2022). We experimented with them in a single-task setting, in which we trained on Meta-World only. Therefore, we have now included the results for the single-task setting in our final version (see Figure 2 in the attached PDF). 6. **L2M-oracle:** Thanks for highlighting this. L2M-oracle obtains the information on what task is currently being observed. We state in line 252: “Moreover, we add another implementation of L2M, which is equipped with an oracle that provides information on what task is currently being observed”. Following line 252, we added a more concrete explanation of L2M-oracle in our methods section: * “This information is provided in terms of the task index. For L2M-oracle, the modulation pool contains as many modulators as there are tasks. The given task index is then used to select the respective modulators. At training time, the task index refers to the dataset the batches are sampled from. At inference time, the task index refers to the environment the DT is currently evaluated in.” 7. **Baseline:** In fact, the suggested baseline is exactly the L2M-oracle baseline, which trains a single set of LoRA weights per task (see previous point). 8. **PEFT on pre-training tasks:** You are right, the pre-trained model is frozen and only the modulators are fine-tuned, thus the performance on pre-training tasks is not affected by PEFT. However, to remain task-agnostic, we train a set of 100 additional keys (see answer to question 3) on the pre-training datasets. At inference time, we concatenate this set to the set of keys introduced by L2M during continual fine-tuning.
Therefore, the model does not need to be told if the inputs come from a pre-training or fine-tuning task. If a “pre-training key” is selected, no modulation occurs. The slight performance drop in Figure 7 comes from conflation effects between the pre-training and fine-tuning keys. We updated our manuscript with this additional information. **Minor suggestions:** 9. **Background on L2P:** Thank you for this suggestion. We now include a more detailed description of L2P and its distinction from L2M in the background section. Among other things, we added additional information on: * The selection of the modulation matrices (per step vs. per sequence). * The usage of the selection count penalty. * The exact loss function to optimize the keys. 10. Thanks for pointing this out, we have fixed this mistake. **Limitations of state space:** The designed state space is indeed specific to the benchmarks considered in our work. We are aware of this limitation and will add a discussion on this point to our paper. We believe that other benchmarks with differing state spaces, such as RLBench, can be approached similarly. However, we are planning to explore alternative approaches in future work. Overall, we are very grateful for your extensive suggestions. We revised our paper accordingly and believe that your suggestions have improved it. Furthermore, we added additional experiments (as included in the one-page PDF) to provide further empirical support. **References**: * Mengdi Xu, Yikang Shen, Shun Zhang, Yuchen Lu, Ding Zhao, Joshua Tenenbaum, and Chuang Gan. Prompting decision transformer for few-shot policy generalization. In International Conference on Machine Learning, pp. 24631–24645. PMLR, 2022. * Yunfan Jiang, Agrim Gupta, Zichen Zhang, Guanzhi Wang, Yongqiang Dou, Yanjun Chen, Li Fei-Fei, Anima Anandkumar, Yuke Zhu, and Linxi Fan. Vima: General robot manipulation with multimodal prompts. arXiv preprint arXiv:2210.03094, 2022.
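To make the selection mechanism described in this rebuttal concrete, here is a minimal NumPy sketch of an L2P/L2M-style modulator pool with an inverse-selection-count penalty and a cosine-similarity key loss. All names are hypothetical and the exact form of the penalty in the paper's Equation 3 may differ; this is an illustration, not the authors' implementation.

```python
import numpy as np

class ModulatorPool:
    """Sketch of L2P/L2M-style key selection (hypothetical names)."""

    def __init__(self, num_keys, dim, seed=0):
        rng = np.random.default_rng(seed)
        self.keys = rng.normal(size=(num_keys, dim))  # learnable keys
        self.counts = np.zeros(num_keys)              # selection counts n(k)

    def select(self, query):
        # Cosine similarity between the query and each key.
        q = query / np.linalg.norm(query)
        k = self.keys / np.linalg.norm(self.keys, axis=1, keepdims=True)
        sim = k @ q
        # Scale by the inverse selection count to discourage the pool
        # from collapsing onto a single modulator (simplified form).
        score = sim / (1.0 + self.counts)
        idx = int(np.argmax(score))
        self.counts[idx] += 1
        return idx

    def key_loss(self, query, idx):
        # Surrogate loss pulling the selected key toward the query:
        # minimize the negative cosine similarity.
        q = query / np.linalg.norm(query)
        k = self.keys[idx] / np.linalg.norm(self.keys[idx])
        return -float(k @ q)
```

In a full implementation, `key_loss` would be added to the Decision Transformer's action-prediction objective and the query would come from embedded state tokens, as the rebuttal describes.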
--- Rebuttal Comment 1.1: Comment: I would like to express my appreciation for the detailed response, which has addressed most of my concerns. I have also read other reviews and responses and found Reviewer Aphp also concerns about the clarity. Given that the authors have responded with helpful clarification and made a revision (though I cannot see it due to the policy of NeurIPS this year), I decided to increase my rating to 5.
Summary: The authors study the problem of preventing catastrophic forgetting in DT finetuning. The proposed method leverages a pool of LoRA adapters and chooses only the relevant adapter matrices during finetuning. The authors achieve good results on Continual-World. Strengths: The application of LoRA pools for finetuning DT is novel, and the problem of continual learning in DT is important. The experimental results show that the proposed method works, and the released dataset should have a good impact on the community. Weaknesses: The only weakness I see is the presentation of the method. It seems that the paper is largely inspired by Learning to Prompt, and as a result, I believe many of the technical details are not explained here, e.g., the exact loss function to optimize the keys. Section I should be polished to include more explanation of the method. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: N/A Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your feedback, which helped a lot to improve our manuscript. We appreciate your positive assessment of our work: thank you. We are optimistic that our dataset will help advance research in the RL community. Thank you for pointing out the lack of technical details regarding our method. In the revised manuscript, we have now included more details and elaborate on them much more: * The exact loss function to optimize the keys. * The selection of the modulation matrices (per step vs. per sequence). * The usage of the selection count penalty. Including these details has indeed improved our manuscript a lot.
Summary: The authors propose an adaptation method for pretrained DT (decision transformer) that combines two finetuning techniques, learning-to-prompt and low-rank adaptation (L2P + LoRA), which have been investigated in the NLP and computer vision domains. This combined method aims at exploiting the benefits of finetuning and prompt-based learning so that adaptation can be achieved parameter-efficiently and without much catastrophic forgetting. Strengths: The authors provide an evaluation and comparison of finetuning techniques applied to DT-based RL policies, including full finetuning, finetuning with an action head, adapters, LoRA, prompt-tuning, and prefix-tuning, which have been well investigated in the NLP and computer vision domains. Based on the evaluation, the authors propose an adaptation method combining L2P and LoRA, by which the pretrained DT can be used to solve new tasks. Weaknesses: The evaluation of different finetuning techniques as a survey is meaningful and helpful for readers, but the proposed solution simply combines the two techniques, and little analysis has been conducted on it. There might be some other combination based on L2P, e.g., L2P with IA3, for adaptation. The authors do not explain clearly why a multi-domain DT (MDDT) is considered, e.g., MDDT can be useful and effective for learning each domain through shared knowledge and representation. Does the proposed L2P+LoRA benefit from MDDT? What if a single-domain DT was tested? Minor errors: - In line 21, no Section 1 title. - In Figure 4(a), the caption should be CW10. - In Figure 4(b), DMC10 should be DMC6. - In Figure 6(b), Success rate should be Normalized score. - In line 326, a missing citation. - Some citation forms are incorrect. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Are there any other finetuning techniques that can be combined with L2P, similar to L2P with LoRA? HyperDT [1] handles parameter efficiency, where the LoRA part is similar.
Could the authors compare the proposed solution with HyperDT? [1] Xu, Mengdi, et al. "Hyper-decision transformer for efficient online policy adaptation." arXiv preprint arXiv:2304.08487 (2023). Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: No specific statement on limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your helpful feedback. Our manuscript improved considerably by addressing and incorporating your comments. **Analysis:** We are glad that you find the evaluation of fine-tuning techniques meaningful and helpful for readers. Regarding additional analysis on L2M, we already investigated: * the effect of the rank for LoRA (see Figure 17 in Appendix G). * the choice of embedding tokens for the query in L2M (Figure 19 in Appendix F). At the suggestion of another reviewer, we further expanded this ablation study (Figure 1a in the attached PDF). * the selection of modulation targets in L2M (Figure 20 in Appendix F). * at the suggestion of another reviewer, we now also added an investigation in which we vary the embedding history length used to construct the query in L2M (Figure 1b in the attached PDF). We agree that further combinations such as L2M with IA3 (Liu et al., 2022) are of interest for adaptation. In Figure 18, Appendix F, we report the results of this comparison. While using IA3 performs worse, it compares favorably in terms of parameter efficiency. In addition, we provided results for L2P in combination with other prompt-tuning based approaches, such as L2P+Pv2 and L2P+PreT (see Figure 18 in Appendix F). In the revised manuscript, we now highlight these variations more prominently. **Multi-domain DT:** Thank you for prompting us to better motivate the multi-domain setting and explain why it is highly relevant in RL. In the revised manuscript, we now motivate our multi-domain setting in much more detail. We want to emphasize that L2M is independent of the pre-training paradigm and applicable in both single and multi-domain scenarios. Overall, we believe that the more challenging multi-domain setting is also more interesting for practitioners as it is a more realistic scenario. Having said that, we do agree that a more thorough investigation of the multi-domain setting strengthens our paper considerably.
Therefore, we performed the following additional experiments: * **Single-domain (Figure 2 in the attached PDF):** pre-training and fine-tuning only on Meta-World (MT40+CW10). Due to the common state and action spaces, we used a simpler (non-discretized) action space and training objective (MSE) for this experiment. This results in considerably higher performance scores. However, this comes at the cost of a loss of generality. The single-domain model can only handle the particular state/action space it was trained on. Thus, fine-tuning it to tasks with new state/action spaces is not possible. * **Cross-domain fine-tuning (Figure 3 in the attached PDF):** pre-training on Meta-World only (MT40) and fine-tuning on DMControl (DMC6). Here, we use the same discretized action space as for MDDT to account for different action spaces across domains. Overall, we observe that the fine-tuning performance on DMC6 (different domain) is considerably worse than for the MDDT (Figure 3b). In addition, we also fine-tune the pre-trained single-domain model on CW10 (same domain). The final performance on CW10 is also lower compared to the MDDT model that was pre-trained on both domains. These experiments indicate that multi-domain pre-training indeed has a positive effect on the fine-tuning performance (Figure 3a). **Other fine-tuning techniques:** As discussed above, we agree that investigating other fine-tuning techniques in combination with L2M is interesting, and we already conducted this investigation in our submission. Due to space constraints, these comparisons have been relegated to the appendix; see Figure 18 in Appendix F. **HyperDT:** Thank you for suggesting the comparison against HyperDT (Xu et al., 2023). Unfortunately, there is no open-source implementation available yet, but we are working on a reimplementation to include HyperDT in our comparison.
We expect HyperDT to work well in the single-task setting, but to fall behind in the continual setting, as it has no mechanism to prevent forgetting. We hope to have clarified all your questions. We revised our paper to highlight the conducted ablation studies, included our additional experiments in the single-domain setting, and aim to add the HyperDT baseline. In case any questions remain, we would be happy to answer them! **References**: * Liu, Haokun, et al. "Few-shot parameter-efficient fine-tuning is better and cheaper than in-context learning." Advances in Neural Information Processing Systems 35 (2022): 1950-1965. * Xu, Mengdi, et al. "Hyper-decision transformer for efficient online policy adaptation." arXiv preprint arXiv:2304.08487 (2023). --- Rebuttal Comment 1.1: Comment: I'd like to extend my thanks for the comprehensive response, which addresses the most of the concerns I had, particularly regarding the motivation behind multi-domain DT in this work and the discussion on other fine-tuning techniques with IA3 in the appendix. I would be inclined to raise my score slightly, given these explanations.
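For readers unfamiliar with LoRA (Hu et al., 2021), which this discussion assumes as the building block for L2M's modulators, the core idea is a frozen pre-trained weight plus a trainable low-rank update. The sketch below is illustrative only, with hypothetical shapes, and is not the paper's code:

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=1.0):
    # Frozen weight W modulated by a trainable low-rank update B @ A
    # of rank r; during fine-tuning only A and B receive gradients.
    return x @ W.T + alpha * (x @ A.T @ B.T)

d_in, d_out, r = 16, 8, 2
rng = np.random.default_rng(0)
W = rng.normal(size=(d_out, d_in))       # frozen pre-trained weight
A = rng.normal(size=(r, d_in)) * 0.01    # trainable down-projection
B = np.zeros((d_out, r))                 # trainable up-projection, zero init

x = rng.normal(size=(4, d_in))
y = lora_forward(x, W, A, B)
```

With `B` initialized to zero, the adapted layer initially reproduces the frozen one exactly, which is what makes swapping different `(A, B)` pairs in and out of a pool (as in L2M) non-destructive to the pre-trained model.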
Rebuttal 1: Rebuttal: Dear Reviewers, We thank you for your helpful comments, excellent feedback, and generally positive responses! We carefully read your constructive reviews and responded to all your questions and comments. Furthermore, we conducted additional experiments and report the results in the attached PDF. The revised manuscript includes much more details. Thanks again and best regards,\ The Authors Pdf: /pdf/6a929d6b865d8dca0fcd8b08ff564db0d547ed30.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper considers the catastrophic forgetting problem in the pre-training and fine-tuning RL setting. The paper proposes Learning-to-Modulate (L2M) to reduce the degradation of pretrained models by modulating the information flow of the frozen pre-trained model via a learnable modulation pool. L2M shows state-of-the-art performance on a continual learning benchmark, while retaining performance on the pre-training tasks. Strengths: + The paper is overall well written and easy to follow. + The paper proposes L2M, which combines parameter-efficient fine-tuning and prompt-based tuning. The method is sound and achieves good results. Weaknesses: - Enhanced baselines: How does L2M measure up against the latest advancements in training methods, such as https://arxiv.org/abs/2211.12740, https://arxiv.org/abs/2301.09816, https://arxiv.org/abs/2211.10869, and https://arxiv.org/abs/2305.16554? These techniques have demonstrated enhanced training of Transformers, resulting in improved generalization and scalability across numerous tasks. Considering the success of these methods in scaling up pretraining, does L2M still hold its relevance? - Absence of weight decay baseline: In the paper, the authors discuss the limitations of commonly used methods that encounter catastrophic forgetting during continual learning. However, it is worth investigating whether the authors explored the effectiveness of weight decay as a preventive measure against overfitting to new tasks. Weight decay, being a straightforward yet potent technique, has proven effective in mitigating overfitting during finetuning processes. Update: The authors have responded with detailed clarification in their rebuttal and most of my concerns are addressed. Thus I increase my rating to 6. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Is the embedding history state-only? Did the authors consider a state-action history? How does the embedding history size impact the results?
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: Yes, the authors adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your constructive feedback and suggestions that improved our manuscript. **Enhanced baselines:** We agree that the methods to which the reviewer is referring are relevant for improving the pre-training stage. However, our method aims at improving the fine-tuning phase, where it prevents forgetting of already learned tasks. Therefore, the referred methods seem orthogonal and might be used in combination with our method to improve overall performance. Consequently, we leave exploring this combination of methods for future work. **Weight decay:** Indeed, we explored weight decay via L2 regularization on the weights of previous tasks (see Figure 6 in the manuscript). Weight decay has been investigated by the community for preventing catastrophic forgetting. However, weight decay was found to be inferior to EWC, which we included as a baseline in Figure 6 (Kirkpatrick et al., 2017). Furthermore, weight decay was explored in relation to loss of plasticity and found to be ineffective in its naive form (Dohare et al., 2021; Lyle et al., 2023). We added a discussion regarding weight decay to the revised manuscript, where we elaborate more on the L2 weight decay baseline. **History embedding:** In our experiments, the embedding history only contained state tokens. However, we conducted an ablation study on how to represent the history in Figure 19 in Appendix H. In Appendix H we compare various options and combinations of state/action/reward/RTG tokens to represent the history. We agree with the reviewer that a combination of state and action embeddings should be investigated. Therefore, in the revised manuscript, we now provide results for a state-action history as well as a state-RTG history (see Figure 1a in the attached PDF). As expected, the state token embeddings perform best. **History size:** The reviewer is right: the dependence on the history size is an important question.
In the revised manuscript, we now include an ablation study, where we vary the embedding history size (see Figure 1b in the attached PDF). By default, we used the embedded state tokens of the last 5 timesteps to construct the query. Our manuscript profited a lot from incorporating your suggestions: thank you. **References:** * Kirkpatrick, J., Pascanu, R., Rabinowitz, N., Veness, J., Desjardins, G., Rusu, A. A., Milan, K., Quan, J., Ramalho, T., Grabska-Barwinska, A., et al. (2017). Overcoming catastrophic forgetting in neural networks. Proceedings of the national academy of sciences, 114(13):3521–3526. * Dohare, Shibhansh, Richard S. Sutton, and A. Rupam Mahmood. "Continual backprop: Stochastic gradient descent with persistent randomness." arXiv preprint arXiv:2108.06325 (2021). * Lyle, Clare, et al. "Understanding plasticity in neural networks." arXiv preprint arXiv:2303.01486 (2023). --- Rebuttal Comment 1.1: Title: Rebuttal Acknowledged Comment: I would like to thank the authors for their effort during the rebuttal. I appreciate the clarification on the weight decay baselines and the extra experiments ablating the history size.
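The regularizers contrasted in this exchange differ only in what they anchor the weights to and how parameter drift is weighted. A minimal sketch (illustrative only; in practice EWC estimates the diagonal Fisher information from gradients on the previous task's data, which is omitted here):

```python
import numpy as np

def l2_to_previous(theta, theta_prev, lam):
    # L2 penalty anchoring current weights to the previous task's
    # solution; plain weight decay would anchor to zero instead.
    return lam * np.sum((theta - theta_prev) ** 2)

def ewc_penalty(theta, theta_prev, fisher, lam):
    # EWC (Kirkpatrick et al., 2017): weight each parameter's drift by
    # its estimated Fisher information, so parameters important for
    # earlier tasks are penalized more heavily for moving.
    return lam / 2 * np.sum(fisher * (theta - theta_prev) ** 2)
```

The Fisher weighting is what lets EWC outperform uniform L2 anchoring: unimportant parameters stay free to adapt to the new task.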
Feature Adaptation for Sparse Linear Regression
Accept (spotlight)
Summary: This work studies the problem of sparse linear regression under the statistical model where the examples are drawn as zero-mean Gaussians with covariance matrix $\Sigma$ and each response is a t-sparse linear combination of the examples plus i.i.d. Gaussian noise. While classical results establish that the LASSO can computationally efficiently recover a good solution v* with a nearly optimal number of samples when the covariance matrix $\Sigma$ is well-conditioned, guarantees beyond this simple setting are substantially lacking, with a few notable exceptions. In general, not much is known beyond a brute-force approach of trying all $\binom{n}{t}$ sparsity patterns, which requires $n^t$ time. This work shows that even if $\Sigma$ is ill-conditioned, if $\Sigma$ is well-conditioned after removing the top or bottom O(1) eigenvalues (i.e. a notion of “robust” well-conditionedness), then this running time can be improved to roughly $f(t) \cdot n^3$. When there are a small number of tiny eigenvalues, the main problem is the existence of sparse linear dependencies among the variables. The algorithmic approach to address this problem is then to iteratively peel off a small number of these variables at a time (IterativePeeling()). This procedure gives a construction of a small dictionary of vectors such that any t-sparse vector can be written as a linear combination of this dictionary with coefficients bounded in L1 (Lemma 2.9), which in turn implies that the LASSO identifies a good solution v in the sense of bounded excess risk. The authors interpret the technique of applying the LASSO to an augmented dictionary as “feature adaptation”. Strengths: * Sparse linear regression is a widely studied yet notoriously difficult problem, and this work identifies a very natural class of inputs for which positive results can be obtained.
* I find the discussion in Section 2.1 on the proposal of (t, alpha) dictionaries as a canonical abstraction of all existing approaches to sparse linear regression to be very interesting and valuable. It helps concentrate research efforts on this problem to this more structured approach, and I believe it may prove to be influential in following works on sparse linear regression. Weaknesses: * It seems like there are no bounds on the sparsity of the approximate solution v that is output by this algorithm, probably due to the fact that the feature adaptation step augments the feature set with dense linear combinations of the existing variables (please let me know if I have misunderstood). Thus, the measure of the “sparsity” of this algorithm lies in the small sample complexity, rather than its ability to output a sparse solution. As such, the guarantees of this algorithm are tightly linked to the statistical setting of sparse linear regression, and it may be difficult to adapt these techniques to the problem of outputting a good sparse solution to linear regression. However, I do find this limitation very interesting, and I wonder if there are gaps between the performance of these algorithms if one is required to output a sparse solution. Such separations are known in certain sparse recovery settings (see, e.g., https://arxiv.org/abs/1110.4414). Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: n/a Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: The authors have provided adequate discussion of limitations Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
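The classical well-conditioned baseline the review contrasts against can be made concrete in a few lines. Below is an illustrative sketch only, not the paper's algorithm: a plain ISTA (proximal gradient) solver for the Lasso under an identity covariance; all sizes and the regularization weight are arbitrary choices for the sketch.

```python
import numpy as np

def ista_lasso(X, y, lam, n_iter=1000):
    """Proximal gradient (ISTA) for (1/(2m))||Xw - y||_2^2 + lam*||w||_1."""
    m, n = X.shape
    L = np.linalg.norm(X, 2) ** 2 / m          # Lipschitz constant of the gradient
    w = np.zeros(n)
    for _ in range(n_iter):
        z = w - X.T @ (X @ w - y) / (m * L)    # gradient step
        w = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return w

rng = np.random.default_rng(0)
n, m, t = 50, 200, 3                           # dimension, samples, sparsity
X = rng.standard_normal((m, n))                # Sigma = I: well-conditioned design
w_true = np.zeros(n)
w_true[:t] = 1.0
y = X @ w_true + 0.01 * rng.standard_normal(m)

w_hat = ista_lasso(X, y, lam=0.05)
support = np.flatnonzero(np.abs(w_hat) > 0.1)
print(support)   # with these sizes, the true support {0, 1, 2} should be recovered
```

This is exactly the regime where, as the summary notes, the Lasso succeeds with a nearly optimal number of samples; the paper's contribution concerns what to do when the design is ill-conditioned.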
Rebuttal 1: Rebuttal: We thank the reviewer for their time and comments, and for appreciating our techniques! The question about sparsity of the estimate is indeed interesting. In general, if we use a feature adaptation approach, some feature may be a dense combination of the original covariates and so we cannot guarantee sparsity in the original basis. Nevertheless, for our main result it should be possible to guarantee some level of sparsity in the original basis by using tools from high-dimensional geometry. This is because (ignoring the final boosting step) the adapted features used in Algorithm 1 are rescalings of the coordinate basis, and because the proof of the result establishes an $\ell_1$-norm bound on the predictor in the rescaled space outside of the set $S$ (which has bounded size) \--- see page 21 of the supplementary material. Given this, one can sparsify the predictor using the Approximate Caratheodory Theorem (Theorem 0.0.2 of Vershynin's book [41]) to ensure sparsity $|S|$ plus a polynomial in the $\ell_1$-norm bound and the reciprocal of the desired prediction error. It should also be possible to achieve a similar sparsity guarantee by replacing the LASSO with an orthogonal matching pursuit method; see theorem 15 and remark 5 of [24]. It's true that the boosting step includes (one) denser feature at each iteration, but this feature corresponds to the predictor at the previous iteration, so it may be possible to iteratively bound the sparsities of the predictors at each iteration. Understanding the guarantees achieved here is an interesting open problem. Another is understanding whether a stronger sparsity guarantee (than that obtained by approximate Caratheodory) is possible in our setting.
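The Maurey-type sampling argument behind the Approximate Caratheodory Theorem invoked in the rebuttal can be sketched directly: sample coordinates with probability proportional to $|w_i|$ and average scaled sign vectors. This is a toy illustration of the theorem's mechanism, not the paper's procedure; the dimensions are arbitrary.

```python
import numpy as np

def sparsify_l1(w, k, rng):
    """Return an (at most) k-sparse vector whose expected l2 distance to w
    is bounded by ||w||_1 / sqrt(k) (Maurey / approximate Caratheodory)."""
    B = np.abs(w).sum()
    p = np.abs(w) / B                       # sample coordinate i w.p. |w_i| / B
    idx = rng.choice(len(w), size=k, p=p)
    out = np.zeros_like(w)
    for i in idx:
        out[i] += B * np.sign(w[i]) / k     # empirical average of B*sign(w_i)*e_i
    return out

rng = np.random.default_rng(1)
w = rng.standard_normal(1000)
k = 200
w_k = sparsify_l1(w, k, rng)
print(np.count_nonzero(w_k), np.linalg.norm(w_k - w))
```

The error scales with the $\ell_1$ norm of the predictor, which is exactly why the $\ell_1$-norm bound on page 21 of the supplementary material is what makes this sparsification step viable.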
Summary: The paper introduces an algorithm to solve sparse regression when the covariates are generated from a normal distribution with an ill-conditioned covariance matrix, i.e. one with outlier eigenvalues. The algorithm is based on feature augmentation, meaning that the covariates are completed with well-chosen vectors, and is designed to provably achieve near-optimal sample complexity in the studied framework. Strengths: Regarding presentation, the paper is clearly written and presented. Each theoretical result is explained and justified, which makes the paper easy to follow. Regarding the content, the idea of augmenting the features by taking the data distribution into account before solving the Lasso seems new. The paper provides interesting theoretical insights on this method, while maintaining a computational perspective, for instance when explaining the design of the dictionary. Weaknesses: While the main contributions of the paper are theoretical, it would have been interesting to get more experimental details on the algorithm. In particular, numerical illustrations of the behavior with respect to the conditioning of the covariance matrix (number of outlier eigenvalues, gap between largest and smallest eigenvalues, ...), and of the robustness with respect to standard algorithms like Basis Pursuit (for instance in "phase transitions" between well- and ill-conditioned scenarios), may broaden the audience and the impact of the contributions. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: The ideas of experiments below may help provide more information on the practical behavior of the algorithm and put it into perspective with the theoretical results. * Could the authors provide the number of samples necessary to reach low risk with respect to the ratio $\frac{\lambda_{n - d_h}}{\lambda_{d_l+1}}$ and with respect to the number of outlier eigenvalues when using their algorithm and when using BP? 
* Could the authors provide the computation time with respect to the number of samples in addition to the risk ? * Could the authors provide the size of the set $S$ obtained in practice with Iterative Peeling, and compare it to the bounds given in Lemma 2.4 and 2.5, with respect to the sparsity $t$ ? * Could the authors provide a comparison of the performance with and without the knowledge of $\Sigma$ (estimated from samples v. known distribution) ? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 4 excellent Limitations: The limitations are discussed by the authors. ----------- After reading the other reviews and the rebuttal, I increased my rating. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their time and comments. In particular, thanks for the good suggestions for experimental directions to consider. It's tricky to understand which instances are the ``worst'' practical instances for the algorithm. However, we can give partial answers to some of your questions: First, regarding the size of $S$. We're actually not aware of any instances where in practice the size of $S$ will exceed roughly $d \cdot t$ (number of outlier eigenvalues times sparsity). It would be quite interesting to understand if there is indeed a ``hard'' example for our algorithm or if there is a tighter analysis than what we were able to show. Second, regarding estimating $\Sigma$ using samples. Unfortunately our algorithm does not have a hope of succeeding when the estimated covariance is low-rank, because it needs an accurate estimate of the eigenspaces of $\Sigma$ --- thus, we need at least $\Omega(n)$ unlabelled samples (in addition to the $m$ labelled samples) to have any hope of success. Numerically, on our simple synthetic example (Figure 1) we do find that the algorithm works when we estimate $\Sigma$ using $2n$ samples. Third, regarding numerical runtime. Our algorithm has two parts. The first part is identifying the set $S$. This has runtime that does not depend on the number of samples, but depends on the dimension due to requiring an eigendecomposition of $\Sigma$; for our example in Figure 1, this step took $0.65$ seconds. The second part of the algorithm is solving the adapted basis pursuit, which had runtime ranging from $0.12$ seconds (on average) when $m = 20$, to $1.8$ seconds when $m = 500$. The scaling was roughly linear in the number of samples. The runtime of the standard basis pursuit was very similar to the runtime of the second part of our algorithm. Finally, regarding the number of outlier eigenvalues. 
For our simple synthetic example, we observe that adding a second, independent sparse dependency does not double the sample complexity; in fact, it's essentially unchanged even with $10$ independent dependencies. This tracks with our theoretical understanding: the sample complexity is additive between (a) a component due to size of $S$, and (b) a component on the order of $t\log n$. In our setting when $d \leq 10$, it appears that the second component is dominant. --- Rebuttal Comment 1.1: Title: Thanks for the answer Comment: Thanks to the authors for their answer and insights on practical details. After reading the other reviews and the rebuttal, I will increase my rating.
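The kind of ill-conditioning under discussion — a sparse linear dependency among covariates creating a single tiny "outlier" eigenvalue — can be illustrated with a toy covariance. This is only an illustration of the input class, not the paper's IterativePeeling routine; the dimension and threshold are arbitrary.

```python
import numpy as np

n = 30
# Plant a sparse dependency x_0 ≈ x_1 + x_2 by shrinking the variance
# along the sparse direction u proportional to e_0 - e_1 - e_2.
u = np.zeros(n)
u[[0, 1, 2]] = [1.0, -1.0, -1.0]
u /= np.linalg.norm(u)
Sigma = np.eye(n) - (1 - 1e-6) * np.outer(u, u)   # eigenvalue 1e-6 along u

eigvals = np.linalg.eigvalsh(Sigma)                # ascending order
n_outliers = int(np.sum(eigvals < 0.1))
print(n_outliers, eigvals[0], eigvals[1])
# one tiny outlier eigenvalue; the remaining spectrum is perfectly conditioned
```

After "peeling off" the single direction $u$, the restricted covariance is the identity — the setting in which classical Lasso guarantees apply again.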
Summary: This paper presents an innovative polynomial-time algorithm for sparse linear regression in the correlated random design setting. The algorithm adapts the Lasso technique to effectively tolerate a limited number of approximate dependencies, resulting in both computational and statistical efficiency for covariance matrices with a few "outlier" eigenvalues. The proposed method is part of a more extensive framework of feature adaptation for sparse linear regression with ill-conditioned covariates and offers the first polynomial-factor improvement over brute-force search for constant sparsity and arbitrary covariance. Strengths: 1. The paper contributes a novel algorithm for sparse linear regression, adeptly adapting the Lasso to accommodate a small number of approximate dependencies. 2. The proposed algorithm exhibits both computational and statistical efficiency for covariance matrices with a few "outlier" eigenvalues, providing a substantial advancement over existing methods in this context. Weaknesses: 1. The paper assumes constant sparsity for simplicity, but it is essential to explore the impact of this assumption on the main results. 2. To more convincingly demonstrate the algorithm's computational and statistical efficiency, the paper would benefit from the inclusion of supplementary numerical simulations. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: 1. The paper assumes constant sparsity $t$ for simplicity. It is unclear whether this assumption is necessary to establish the main results of the paper? In particular, it would be interesting to examine the case where the sparsity $t$ takes the order of $\log n$. 2. It is important to consider the more general case where $t$ is treated as a variable instead of a constant. I wonder whether the main results in the paper remain valid in this case. Confidence: 3: You are fairly confident in your assessment. 
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their time and comments. In case it was a point of confusion, we'd like to emphasize that our results do apply when $t$ is a variable, i.e. in Theorems 1.1 and 1.2, there are no factors of $t$ ``hidden'' in any constants. So for example when $t = \log \log n$ our results still yield state-of-the-art sample efficiency / computational efficiency tradeoffs. It is indeed true that when $t = \Omega(\log n)$ our results become vacuous, and it's a very interesting direction for future research whether this limitation can be alleviated further. **Q:** *``To more convincingly demonstrate the algorithm's computational and statistical efficiency, the paper would benefit from the inclusion of supplementary numerical simulations.''* Certainly; this paper was primarily theoretical (with some simple numeric validation) but it would indeed be interesting to eventually apply to real data. --- Rebuttal Comment 1.1: Comment: Thank you for your response.
Summary: This paper studies the correlated random design setting, where the covariates are drawn from a multivariate Gaussian, and seeks an estimator with small excess risk. This work provides a polynomial-time algorithm that, given Σ, automatically adapts the Lasso to tolerate a small number of approximate dependencies, and achieves near-optimal sample complexity for constant sparsity and if Σ has few “outlier” eigenvalues. Strengths: Sparse linear regression is a fundamental problem in high-dimensional statistics. This paper studies a polynomial-time algorithm that automatically adapts the Lasso to tolerate a small number of approximate dependencies. In theoretical analysis, this work achieves near-optimal sample complexity for constant sparsity. The proposed algorithm fits into a broader framework of feature adaptation for sparse linear regression with ill-conditioned covariates. Weaknesses: 1.If this work can provide experimental verification of the superiority of the proposed algorithm, it will be more convincing. For example, compare with the related work to verify the performance of the proposed algorithm in terms of time complexity and accuracy. 2.In figure 1, when the number of samples is greater than 100, is the standard deviation of adapted BP algorithm zero? 3.The presentation of references is not standardized, such as: [38] Sara Van De Geer. On tight bounds for the lasso. Journal of Machine Learning Research, 19:46, 2018. [22] Jonathan Kelner, Frederic Koehler, Raghu Meka, and Ankur Moitra. Learning some popular gaussian graphical models without condition number bounds. In Proceedings of Neural Information Processing Systems (NeurIPS), 2020. 4.The organization and presentation of this paper can be further improved. Technical Quality: 3 good Clarity: 1 poor Questions for Authors: See “Weaknesses”. Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 1 poor Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their time and comments. To address their questions: **Q:** *``compare with the related work to verify the performance of the proposed algorithm in terms of time complexity and accuracy.''* We would like to emphasize that *no* prior work has addressed the problem of sparse dependencies among covariates, aside from the very special case where the dependencies have sparsity $2$. We did provide experimental evidence (Figure 1) that our algorithm achieves superior accuracy to Lasso/Basis Pursuit, the algorithm that practitioners would typically try. We did not experimentally compare with e.g. the agglomerative clustering approach mentioned in our related work section, since (as discussed) it's clear from first principles that this approach cannot succeed in our general setting. **Q:** *``In figure 1, when the number of samples is greater than 100, is the standard deviation of adapted BP algorithm zero?''* Yes. For each sample size beyond $100$, in all ten trials our algorithm achieves zero prediction error. Note that this is reasonable since the samples are noiseless. **Q:** *``The organization and presentation of this paper can be further improved.''* If the reviewer has constructive and concrete suggestions for improvement of the organization/presentation, we would love to hear. --- Rebuttal Comment 1.1: Comment: Thank you for your response.
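As the rebuttal notes, with noiseless samples exact recovery (zero error across trials) is expected once the sample size is large enough. A minimal basis pursuit solver via linear programming illustrates this in the classical well-conditioned case. This is an illustrative sketch with arbitrary sizes, using scipy's generic LP solver, not the paper's adapted variant.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, b):
    """min ||x||_1 s.t. Ax = b, via the LP split x = u - v with u, v >= 0."""
    m, n = A.shape
    res = linprog(c=np.ones(2 * n),
                  A_eq=np.hstack([A, -A]), b_eq=b,
                  bounds=[(0, None)] * (2 * n))
    return res.x[:n] - res.x[n:]

rng = np.random.default_rng(3)
m, n, t = 40, 60, 3
A = rng.standard_normal((m, n))                 # well-conditioned Gaussian design
x_true = np.zeros(n)
x_true[:t] = [1.0, -2.0, 0.5]
b = A @ x_true                                  # noiseless observations

x_hat = basis_pursuit(A, b)
print(np.linalg.norm(x_hat - x_true))           # essentially zero: exact recovery
```

With $m$ well above the compressed-sensing threshold (roughly $t \log(n/t)$ up to constants), the recovery is exact up to LP solver tolerance, which is consistent with the zero standard deviation observed in the paper's Figure 1.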
NeurIPS_2023_submissions_huggingface
2023
CS4ML: A general framework for active learning with arbitrary data based on Christoffel functions
Accept (spotlight)
Summary: This paper studies an extremely general notion of active learning in the $\ell_2$ norm. Suppose we are able to actively observe data in many different modalities. "Active Learning" means we can choose where we observe data in a domain, and "different modalities" means there are fundamentally different ways to observe the data. For example, an MRI machine can have many different coils in it, and for each coil we can measure data at a user-specified frequency. The fact that there are different coils with different physical properties corresponds to having "different modalities". The fact that we can choose which frequencies to measure data at corresponds to "active learning". The paper studies an extremely general formalization of active learning with different modalities, with error measured in the $\ell_2$ norm. Specifically, it suggests computing a "Generalized Christoffel Function" (i.e. leverage score function) for each modality, and sampling from each modality at random places, chosen with probability proportional to the generalized Christoffel function. The paper allows for an exceptionally general notion of "sampling", where a sample could entail something classical in numerical linear algebra, like observing a row from a matrix. It can also mean something broader, like observing the gradient of a function. Also, while the error in each modality is measured with respect to the $\ell_2$ norm, it is the weighted $\ell_2$ norm with respect to a given measure $\rho$ for each modality. So, the user can choose to have (e.g.) a uniform error metric on a small interval, or to instead have a Gaussian-weighted error measure on the real line. Lastly, the framework also has enough flexibility to give guarantees in some nonlinear spaces, with the main example being the union-of-subspaces. Theoretical guarantees on sample complexity are given, roughly matching the rates that would appear in the special case of leverage score sampling for matrices. 
Further, many experiments and concrete instantiations of the general framework are presented. Applications vary from approximating functions with multivariate polynomials, to optimizing MRI reconstructions, to solving PDEs with physics-informed neural nets. Strengths: The paper is also 40 pages of really intense serious research. The appendix is chock-full of interesting special case studies, demonstrating the flexibility of their theory and the practical algorithms it suggests. The paper is so big and full of ideas that the NeurIPS peer review system, where I don't have enough time to read it cover-to-cover, really does this paper a disservice. I'd love to have 2 months to focus on this paper and understand the ins-and-outs of every case study and theorem. However, I don't have that time, so I'll presume the correctness of everything in the appendix and strongly recommend publishing this paper. The paper is exceptionally general, doing a great job of generalizing much of the recent research on leverage score sampling beyond matrices. Lots of existing work has slowly generalized one or two ideas from leverage score sampling to Hilbert spaces or non-linearity or whatever else fits that work's setting. This paper really goes the extra mile and writes out a ridiculously general setting that covers: - Fitting a polynomial to a function, where we can observe both the function and its gradient - Fitting a polynomial to a function, where we can choose if we want to observe the function or its gradient at each query (in case one is more expensive than the other, we can consider them separate modalities with separate sample complexities). - Iteratively refined approximations to solving a PDE - Reconstructing MRI data from observations at carefully chosen frequencies The general framework is phrased in terms of normed spaces, Hilbert spaces, measurable functions, and other abstract math that really gives the user freedom to do what they want. 
The theorems also apply to this extremely general framework, so the user immediately has Christoffel sampling guarantees with a likely near-optimal sample complexity out-of-the-box. I think this paper is original in its powerful results in extreme generality, and general in its applications. The paper's results are high quality sample complexity guarantees. The paper is broadly well written (but this is a bit of a weaker point). The paper should be significant in its expressive power owing to its applicability to many domains. I strongly recommend publishing it. Weaknesses: The paper does suffer a bit on the clarity front. I'm familiar with leverage score sampling (and even some infinite-dimensional generalizations thereof), so I'd think that a generalized Christoffel function framework would be easy enough to understand. This was not the case though, and the extremely general framework was daunting and unapproachable to read about. There's even an appendix subsection devoted to relating the extremely general framework back to the leverage score form I'm comfortable with. However, this description isn't super clear and even has (as far as I can tell) several minor errors which really harm legibility. The general framework (section 2.1) is just a list of math definitions stated without any running example of what each mathematical object is. It's very unintuitive what the difference between a "measurement domain" and a "measurement space" is [Lines 85-87]. I'd strongly recommend the authors decide on a running example to present alongside all of the math definitions, which would make such definitions more intuitively distinct and clear. The experiment and case study examples (section 3) are also pretty hard to approach. Section 3 consistently summarizes ideas at an extremely high level, and pushes the rigorous instantiations of the general framework to the appendix. 
While this makes sense from a page-limit perspective, it does harm the clarity and reading experience. Unfortunately, under the constraints of the NeurIPS format, I'm not sure a better option is available. Perhaps the authors can avoid this extreme appendicizing in the arXiv version of their paper? Even when reading the appendices in good detail, there's bits and pieces that I don't follow (especially in the MRI example). Everything I understood was clearly correct, but sometimes the technical domain-specific language gets a bit overwhelming making the examples hard to follow. This clarity point is not a game-breaking issue, but another pass at making the technical material more approachable would benefit the paper, especially when it's targeting such a broad audience of both theoretical and empirical data scientists. _As a side note, it's curious to note one particular limitation of the framework -- it cannot explicitly handle ridge regression and ridge leverage scores, which is essential for some prior work on infinite dimensional leverage score sampling, like paper [12] in the references. I don't see any reason why CS4ML couldn't handle ridge regression / ridge leverage scores though._ Technical Quality: 3 good Clarity: 3 good Questions for Authors: First, I have some moderately technical qualms with the paper: 1. In Figure 1, if I understand Appendix B.6 correctly, is it correct that MCS (monte carlo sampling / non-christoffel sampling) is performing worse with more samples because of numerical instability / poor conditioning? Or, like [Line 194] says, is it because the sample complexity of MCS sampling is so bad that the error.... _gets worse_ as the sample complexity grows? 1. The Appendix doesn't seem to have a fully accurate and consistent instantiation of leverage scores into the CS4ML framework. Specifically, $\mathbb{U} = \mathbb{C}$ on [Line 672] seems at odds with $\mathbb{U} \subseteq \mathbb{X}$ when $\mathbb{X} = L\_\rho^2(D)$. 
I'm pretty sure that $\mathbb{U} = \text{span}(A)$ in leverage score sampling? Also, it feels pretty inconsistent to describe $\mathbb{V}$ as a subset of $\mathbb{R}^N$ but to describe $\mathbb{X}$ as a weighted $L_2$ function space on $[n]$. Understanding that much is already non-obvious from the notation, but would be much clearer if you just wrote $\mathbb{X}=(\mathbb{R}^N, \|\|\cdot\|\|\_{\mathbb{X}})$ where $\|\|\vec v\|\|\_\mathbb{X}^2 = \frac1N \|\|\vec v\|\|_2^2$. Also, there's no reason (afaik) for $\mathbb{Y}$ to be complex instead of real. Further, the sampling function should more explicitly be about looking up an item from a vector, something like $L(y)(\vec u) = \sqrt{p(y)}u(y) = \frac{1}{\sqrt N} [u]\_{y}$. Lastly, from my algebra, I found that $\alpha=\beta=\frac1N$ in equation (2.4) for the leverage score setting. Did I make an error, or is there an adjustment to the instantiation that can instead make $\alpha=\beta=1$? --- After that, I've got smaller qualms / typos / recommended edits: 1. [Line 92] The choice to make $(D_c,\mathcal{D}_c)$ makes for two symbols which look identical at a first (and second) glance. I'd change at least one of these symbols so it's clear that the first and second items in the pair are different. 1. [Line 116] Add parenthesis around 2.1 1. [Line 142] Remove $K=$ 1. [Line 156] Why isn't there rescaling by $\sqrt{p(y)}$ here, like there was in the leverage score appendix's definition of $L$? 1. [Figure 1] Explain the confidence intervals in the caption 1. [Line 168] The placement of "See Section 4.3" is not great, since Section 4.3 is about translating an "in-probability bound" into an "expectation bound". I'd move "See Section 4.3" one sentence earlier. 1. [Line 226] Redefine PINN here. I know it's defined in the abstract, but I'd do it again here. 1. [Figure 3] Add titles to the plots 1. [Lines 230-239] I completely failed to understand what this paragraph is trying to say.... 
I just don't understand the problem or how it's being solved. 1. [Section 4] Swap $\varepsilon$ and $\delta$. I'd **very heavily** recommend this. The choice of $\delta$ to mean failure probability and $\varepsilon$ to mean accuracy is extremely standard. I would not mess with this; it's very confusing to read. 1. [Line 269] Interpret this big messy norm $\|\|\|\cdot\|\|\|$ somewhere. It looks scary, but I don't think it really should be that scary? 1. [Line 269] Missing $c$ subscript on $\nu$. 1. [Equation 4.1] Explore this sampling complexity. What is it really saying, comparing our sampling distribution to the Christoffel function? Can we use it to get a sorta "coherence assumption" for uniform sampling / MCS? This should be somewhat intuitive, but it really stands to be formally explored a bit more in the writing. 1. [Lines 271-272] Why is it "particularly relevant"? Why's it hard to understand the Christoffel function of the cover but easy to understand the Christoffel function of $\mathbb{U}$. I sorta intuitively believe it's true, but you should justify it. 1. [Line 311] Consider calling $\mathcal{T}_\theta$ a "shrinking operator" maybe? or a "regularization operator" or something? It's not very truncation-like imo, but it's really a small preference. This is a very mild/soft recommendation. 1. [Lines 316-318] Please formalize this added-penalty approach. It's the generalization of ridge regression, which feels very worth exploring. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
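The matrix leverage-score special case that the review keeps returning to can be sketched concretely: for least squares with an $N \times d$ matrix, the leverage score of row $y$ is $\tau_y = a_y^\top (A^\top A)^{-1} a_y$, the scores sum to the rank, and sampling rows with probability proportional to $\tau$ (with importance reweighting) approximately preserves the least-squares solution. A standard illustration with arbitrary sizes, not code from the paper:

```python
import numpy as np

rng = np.random.default_rng(4)
N, d = 500, 5
A = rng.standard_normal((N, d))
b = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(N)

# Leverage scores via a thin QR: tau_y = ||q_y||_2^2, and sum(tau) = rank(A).
Q, _ = np.linalg.qr(A)
tau = np.sum(Q**2, axis=1)
p = tau / tau.sum()

# Sample m rows w.p. proportional to tau, reweighting each by 1/sqrt(m * p_y).
m = 50
idx = rng.choice(N, size=m, p=p)
w = 1.0 / np.sqrt(m * p[idx])
x_full = np.linalg.lstsq(A, b, rcond=None)[0]
x_samp = np.linalg.lstsq(A[idx] * w[:, None], b[idx] * w, rcond=None)[0]
print(np.linalg.norm(x_samp - x_full))   # small: the subsampled solve is close
```

The paper's generalized Christoffel function plays the role of $\tau$ when rows become abstract sampling operators and the span of $A$ becomes a general (possibly nonlinear) approximation space.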
Rebuttal 1: Rebuttal: We thank the referee for their excellent review. ## Weaknesses ### 'The paper...' We agree. **Please see our global rebuttal for discussion and changes we will make.** ### 'There's even an....' **We will edit and move parts of this to the main paper.** See below for details. ### 'The general framework....' A running example is an excellent idea. We think the best example is the classic regression problem. **Please see our global rebuttal.** Regarding the terminology: the 'measurement domain' is how we enumerate the possible measurements. In classic regression, it is the domain of the function. In Fourier sampling (Example 2) it is the set of possible frequencies. The 'measurement space' is the space where the measurements lie. E.g. $\mathbb{R}$ for classic regression. **We will clarify this.** ### 'The experiment....' We agree that this can pose challenges. Our goal in this work is to introduce a general framework, then present three diverse examples to show its broad applicability. Unfortunately, it is hard to describe each example in the page limit, hence we pushed the details to the appendices. **The idea to reorganize the arXiv preprint is an excellent one. We will do this.** ### 'This clarity....' **We will edit the appendices for readability.** ### 'As a side...' Thanks for raising this interesting issue. We are inclined to agree, but would need to investigate further as the proofs in [12] are quite different to ours. This is an interesting aim for future work. ## Questions ### 1. Exactly. The worse performance arises from instability due to poor conditioning. As shown in Fig. 4, cond$(A)$ blows up with $m\approx n\log(n)/(d+1)$. In Fig. 5 we compute how large $m$ should be, given $n$, to ensure a cond$(A)\le tol$. As shown, $m$ needs to grow much more rapidly than $n\log(n)/ (d+1)$ for this to occur. ### 2. Thanks. We think there are two issues here. First, relating (A.1) to Definition 2.4. 
This is described in lines 672-674, but there were several errors: i) $\mathbb{U}=\mathbb{C}$ in line 672 should have been deleted, ii) it was not stated that $D$ should be equipped with the reference measure $\rho$ (e.g. the Lebesgue measure) that defines the density $p$ via the Radon-Nikodym derivative. There was an unfortunate notation clash, since we also used $\rho$ for the probability measure with density $p$. We will change this probability measure to $\pi$, so that $p=d\pi/d\rho$. iii), in l.674, it should be $\tau(\mathbb{V},p)(y)=K(\mathbb{V})(y)$. **We will fix this discussion.** Second, relating (A.2) to (A.1). We did this in lines 669-671. However, we agree it is easier to directly relate it to Definition 2.4 (a benefit of the general definition). This can be done in several ways, but perhaps the easiest is to set $\mathbb{X}=(\mathbb{R}^N,\|\|\cdot\|\|_{2})$, $\mathbb{Y}=(\mathbb{R},|\cdot|)$, $D=\{1,\ldots,N\}$ with the uniform measure $\rho$ and $L(y)(u)=[u]_y$. Then, with $\mathbb{V}$ as in line 669, $$K(\mathbb{V})(y)=\max_{v\in\mathbb{V},v\neq0}\frac{\|\|L(y)(v)\|\|^2_{\mathbb{Y}}}{\|\|v\|\|^2_{\mathbb{X}}}=\max_{x\neq0}\frac{[Ax]^2_y}{\|\|Ax\|\|^2_2},$$ which is precisely (A.2). Note that in this case, nondegeneracy holds with $\alpha=\beta=1$, since $$\int_D\|\|L(y)(x)\|\|^2_{\mathbb{Y}}d \rho(y)=\sum^{N}_{i=1} (x_i)^2.$$ We get $\alpha=\beta=1$ since we do not impose that $\rho$ is the uniform probability measure. **We will add this derivation.** ## 'After that....' ### 1-3. **We will fix these**. ### 4. Good question. It will hopefully be clear in the revision when we add the classic regression problem. Suppose that $\mathbb{X}=L^2_{\pi}(D)$ for some probability measure $\pi$ with density $p$. Then there are two ways to setup the problem: a) Consider $D$ with the measure $\rho$ that defines $p$ as $p=d\pi/d\rho$. Set $L(y)(u)=\sqrt{p(y)}u(y)$. 
Then $$ K_a(y):=K(\mathbb{V})(y)=\sup\{p(y)|v(y)|^2/\|v\|^2_{\mathbb{X}}:v\in\mathbb{V},v\ne0\} =\tau(\mathbb{V},p)(y). $$ b) Consider $D$ with $\rho=\pi$ and set $L(y)(u)=u(y)$. Then $$ K_b(y):=K(\mathbb{V})(y)=\sup\{|v(y)|^2/\|v\|^2_{\mathbb{X}}:v\in\mathbb{V},v\ne0\}. $$ Observe that $K_a(y)=K_b(y)p(y)$, so the two terms differ by $p(y)$. However, they lead to exactly the same Christoffel sampling measure $\mu^{\star}$ in (2.6). So the difference boils down to convention, i.e. whether to include $p$ or not. **We will explain this in the revision.** ### 5-8. **We will fix these.** ### 9. **We will rewrite this for clarity.** ### 10. **We will change this.** ### 11. **We will add this (likely in the appendices).** ### 12. **We will fix this.** ### 13. Good point. We think the best way to do this is to compare with MCS. In MCS, (4.1) involves $\mathrm{esssup}_{y\sim\rho_c}K_c(\mathbb{V})(y)$, whereas with CS it involves $\kappa_c(\mathbb{V})=\int K_c(\mathbb{V})(y)\,d\rho_c(y)$. Thus, the difference relates to the maximal behaviour of $K_c(\mathbb{V})$ versus its integral (i.e. mean). If it is very flat, these bounds are similar. If it has spikes/peaks, they may differ substantially. An instance of this can already be seen in Fig. 2, where $K$ is sharply peaked near the zero frequency (in the middle) and decays as frequency increases. **We will add this intuition.** ### 14. Good question. Our thought is that it may be possible to show that there is a cover, but hard to formulate it explicitly, and therefore difficult to sample from (4.2). Generative models are an example. The range of a ReLU generative model is contained in a union of subspaces (lines 1009-1015). But these subspaces are likely not easily quantified. And even if they were, since there are a lot of them, it would be computationally intractable to evaluate the Christoffel function over their union. However, this point was poorly explained. **We will rephrase this sentence.** ### 15. 
**We will change this.** ### 16. Apologies. We no longer think it is possible to get the same error bound using a regularized estimator. **We will delete the sentence.** --- Rebuttal Comment 1.1: Title: Thanks for the response! Comment: Apologies for the delay in my response. This was a delightful response from the authors, and I remain very much inclined to accept this paper. I look forward to reading this again, with updated notation and intuitions. As an additional side note, it may be interesting to see if the infinite-dimensional work of [Shustin & Avron](https://arxiv.org/pdf/2104.05687.pdf) on leverage scores for general quasimatrices can be formalized as a special case of your results (when not regularizing, so that $\lambda = 0$). No pressure to actually include it, but it might be cool to look at, and it might also provide proof strategies for ridge regression that work even if [12] doesn't provide anything immediately useful. --- Reply to Comment 1.1.1: Title: Thanks! Comment: Thank you for your kind words and for the link to the interesting paper. We will certainly look into this.
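As a concrete companion to the (A.2) derivation in the answer to Question 2 above: for $\mathbb{V}=\{Ax\}$, the quantity $\max_{x\neq0}[Ax]_y^2/\|Ax\|_2^2$ is exactly the classical leverage score of row $y$ of $A$. A minimal numerical sketch of this identity (the matrix sizes here are hypothetical, chosen only for illustration; this is not code from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a tall matrix A whose range plays the role of the
# subspace V = {Ax : x in R^k} from (A.2) in the discussion above.
N, k = 50, 8
A = rng.standard_normal((N, k))

# Classical leverage scores: squared row norms of an orthonormal basis Q
# of range(A), i.e. the diagonal of the projector A (A^T A)^{-1} A^T.
Q, _ = np.linalg.qr(A)
leverage = np.sum(Q**2, axis=1)

# K(V)(y) = max_{x != 0} [Ax]_y^2 / ||Ax||_2^2.  The maximizer of this
# generalized Rayleigh quotient is x* = (A^T A)^{-1} A^T e_y; we plug it
# back in and compare the attained value with the leverage score.
K = np.empty(N)
for y in range(N):
    x_star = np.linalg.solve(A.T @ A, A[y])  # A[y] is A^T e_y
    v = A @ x_star
    K[y] = v[y] ** 2 / np.dot(v, v)

assert np.allclose(K, leverage)
```

The sum of the leverage scores equals the subspace dimension $k$, which is the finite-dimensional analogue of $\kappa(\mathbb{V})=\dim(\mathbb{V})$ in the subspace case.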
Summary: This paper proposes a general framework for active learning. It claims that the proposed method can help achieve near-optimal sample complexity. In addition to the theoretical results, this paper also presents three use cases, showing favorable results for the proposed method. Strengths: Important topic with claimed good results. Weaknesses: 1. Poor presentation: The very heavy theoretical setup buries the connection to solving any real active learning problem. No high-level description of the algorithm and its advantages over existing active learning algorithms. No insights are provided based on the theoretical derivation. 2. Poor experimental comparison: The proposed method was only compared with random sampling. There's no comparison with any existing active learning method. Technical Quality: 1 poor Clarity: 1 poor Questions for Authors: 1. line 19, p. 1: In the ML literature, features are represented by $x_i$ rather than $y_i$, while the latter denotes the label, which is $f(y_i)$ in your case. 2. line 38, p. 2: In your example of computational imaging, is one image an example or is one pixel an example? Both cases can occur in real applications. 3. line 60, p. 2: Do you mean the label function $f$ is in a Hilbert space? For any function with a set of real-valued parameters (e.g., a neural network), if it's represented by a vector of its real-valued parameters, is such a function in a Hilbert space? 4. line 61, p. 2: What does "data" in "data arises" mean? Pairs of features and labels? 5. line 179, p. 5: You claim "there are many settings where one can acquire both function values and values of its gradient." Can you give some real examples of such settings? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 1 poor Presentation: 1 poor Contribution: 1 poor Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the referee for their informative feedback. ## Weaknesses ### 1. 'Very heavy theoretical...' The heavy setup was also noted as a weakness by other reviewers. **As discussed in our global rebuttal, we will edit Section 2 to improve this presentation. In particular, we will add the running example of the standard regression problem.** This should greatly improve the presentation and help connect it to what the reader is familiar with. ### 'No high-level...' We agree that the paper lacks a high-level description of the algorithm. However, what we propose is more of a 'framework' than an 'algorithm'. We do think Theorem 2.5 neatly summarizes the main facets of the approach. **If the referee thinks this can be made clearer, we are happy to hear suggestions.** The reason we do not consider this an 'algorithm' is that there is no general prescription for how to sample from the optimal measure (2.6). This is the main computational hurdle to tackle in practice. As our examples show, how to do this (or, indeed, whether it is possible) is very problem dependent. Nonetheless, this is an important point. **We will add a sentence to the conclusion about this.** ### 'and its advantages...' In terms of the advantage over other active learning strategies, we are currently unaware of any work that addresses the broad class of problems that our framework does. **Please see our global rebuttal for more discussion on this.** Our work essentially shows that one can extend well-known leverage score sampling to a very broad class of problems, with theoretical guarantees. **We will add a sentence to this effect in Section 1.2.** ### 'No insights are...' We already give several insights in the paper. Directly below Theorem 2.5 we explain how one obtains near-optimal log-linear sample complexity whenever $\mathbb{U}$ is a subspace, or a union of a few (i.e., $d \ll \infty$) subspaces. 
This insight explains how our work extends classical leverage score sampling to a much more general setting. Further insights are also given in Remark 4.5. Finally, in Section 5 we return to the union-of-subspaces case with further discussion. Section A.4 (mentioned therein) also gives additional insight. We agree, however, that these theoretical contributions are hard to parse and spread over too many sections. **Referees tt34 and n7M7 also had some suggestions to improve this presentation, which we will implement. We will (i) move the discussion on sample complexity in Section 5 (lines 331-335) and consolidate it into an expanded discussion directly below Theorem 2.5, and (ii) add a brief comparison of the sample complexity bound for CS versus MCS in Section 4.2, demonstrating how this corresponds to the difference between the mean value of the Christoffel function and its maximum.** ### 2. This is a good point. **Please see our global rebuttal for discussion of this comment and the actions we intend to take.** ## Questions ### 1. We agree. This was also noted by reviewer tt34, along with other notational comments. We will change this to $y_{ic} = L(\theta_{ic})(f^*) + e_{ic}$. We prefer to use $\theta$'s to represent the measurement location: $x$'s often denote spatial locations, whereas in our work they could be, e.g., frequencies. ### 2. Good point. The short answer is neither. In Fourier imaging, an 'example' is a frequency. Adopting the notational changes proposed in our rebuttal to reviewer tt34, in this problem we wish to reconstruct a (vectorized) target image $f^* \in \mathbb{C}^N$ from measurements $y = A f^* + e$. Here $A = P_{\Omega} F$ is a subsampled Fourier matrix, where $\Omega \subseteq \{1,\ldots,N\}$ is a set of $m$ frequencies. In the active learning problem, we wish to select $\Omega$ (i.e., select frequencies) to recover $f^*$ as well as possible from $y$. 
This is detailed in Appendix C, but we agree it could be clearer both there and in Section 3.2 of the main paper. **We will add a few sentences for clarity, including the above explanation of what an example is in this context.** ### 3. Yes, the target object $f^*$ is an element of the (abstract) Hilbert space $\mathbb{X}$. In classical regression (see lines 19-22), $\mathbb{X} = L^2_{\rho}(D)$ for some measure $\rho$, i.e., the space of square-integrable functions over $D$ wrt $\rho$. The reason for considering abstract Hilbert spaces is motivated by applications where the target object may not be a scalar-valued function. For example, in parametric PDE learning problems in UQ (lines 47-48), the target object is a function that takes values in a Hilbert space. In this case, $\mathbb{X}$ is a Bochner space. Similarly in operator learning, the target object is an operator, not a function. **As we mentioned above, we will add some sentences in Section 1.2 to better motivate this generalization. We will also add the classic regression problem as an example in Section 2.** We are not sure what the referee means by their question. If the function $f_{\theta} : D \rightarrow \mathbb{R}$ was square-integrable over its domain then it would belong to the Hilbert space $L^2(D)$. This doesn't seem restrictive, e.g., it holds for neural networks with (piecewise) continuous activation functions over compact domains. **We would welcome a clarification of this question.** ### 4. Good point. As stated in line 19 (modulo the new notation), in the classical setting by 'data' we mean pairs $(\theta_i , f^*(\theta_i))$. In our general framework, the data is pairs $(\theta_{ic},L_c(\theta_{ic})(f^*))$. The main point is that rather than function samples, i.e., $f^*(\theta_i)$, of the target object, we consider linear operators, i.e., $L_c(\theta_{ic})(f^*)$. We agree this was not sufficiently clear. **We will edit lines 96-100 to clarify exactly what the training data is.** ### 5. 
These were mentioned in lines 46-50. However, we agree the reader may well have forgotten this by line 179. **We will add a reference back to Section 1.1 in line 179.**
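To make the Fourier imaging measurement model from our answer to Question 2 above concrete, here is a minimal 1D numerical sketch of $y = P_{\Omega} F f^* + e$. All sizes, the noise level, and the (here uniform) random choice of $\Omega$ are hypothetical stand-ins for illustration; in the paper, $\Omega$ would instead be drawn via Christoffel sampling:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 1D analogue of the Fourier imaging example: recover a
# vectorized "image" f* in C^N from m << N noisy frequency measurements
# y = P_Omega F f* + e, where F is the unitary DFT matrix.
N, m = 256, 64
f_star = rng.standard_normal(N)

F = np.fft.fft(np.eye(N), axis=0) / np.sqrt(N)  # unitary DFT matrix
Omega = rng.choice(N, size=m, replace=False)    # the m chosen frequencies
A = F[Omega, :]                                 # A = P_Omega F
y = A @ f_star + 0.01 * (rng.standard_normal(m) + 1j * rng.standard_normal(m))

# Minimum-norm least-squares fit: with m < N this interpolates the data,
# and the quality of f_hat depends on which frequencies were chosen --
# exactly the choice the active learning problem optimizes over.
f_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
```

Each row of $A$ here is an unboundedly indexed 'example' in the sense of our answer: one frequency, not one pixel or one image.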
Summary: The paper proposes a framework for active learning (here meaning that the user controls the sampling strategy according to which the locations of the measurements are made) designed to handle various cases of regression (vector-valued, multimodal, and more). To do so, it introduces the concept of generalized Christoffel functions, which will drive the choice of sampling strategies. A statistical analysis of the risk associated with the empirical risk minimizer for the square loss is conducted, showing that a log-linear number of observations (with respect to quantities related to the geometry of the hypothesis space) allows for efficient training using the Christoffel sampling strategy. Numerical experiments on three different problems (all cast inside the cs4ml framework) complement the approach, showing improvements over naive Monte-Carlo sampling. Strengths: - The proposed approach is novel and encompasses many problems valuable to the machine learning community. - Mathematical details are sound. - Experiments show clear improvements. - The appendix contains many remarks and additional experiments that further develop the framework. - I share the authors' enthusiasm regarding the potential of the framework, which is far from explored. Weaknesses: - It sometimes feels that the paper is overly complicated for the sake of generality, without benefits (see questions). - While fairly general, the framework assumes that the user has access to the data generation strategy (not the targets, but the inputs), which is often not the case in practice. - Overall, the paper is very dense and notation-heavy. It is impossible to read it without constant back-and-forth to the (31-page-long) appendix, which makes the reviewer wonder why the 9-page limit exists for this conference. In my opinion, it would be beneficial to have a longer (i.e. 
journal) version of the paper, as the supplementary contains many interesting remarks that help a lot with the understanding of the ideas. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: I found the notation to be quite heavy and not intuitive at all. I suggest changing the naming convention adopted here - while all notation choices are arbitrary, they need to be driven by simplicity and logic. In particular: - I suggest adopting $x_{ic}$ for the measurement locations, to conform to the classical $y = f(x) + \epsilon$ regression statistical model. Anything evoking something we usually put inside a function call or put measures on is OK ($\theta_{ic}$?). - It feels weird not to have the random variable $Y_c$ living in the space $\mathbb{Y}$. Measurements could be $y_{ic}$ instead of $b_{ic}$. - To lower the number of different letters, the true function associated with the generating process could be $f^\star$, and the candidates could be $f$ (instead of $u$), keeping the estimate $\hat{f}$. - In the regression literature, hypothesis spaces are usually referred to as $H$ or $F$. If the measurement locations are $x$, the name of the object space has to change as well. - In Definition 2.4, you introduce yet another variable name, $\mathbb{V}$, which could just be the variable name of the hypothesis space, to improve clarity. Other questions/comments: - Where do you use non-boundedness of the measurement operators? Could you not use the standard notation that elements of $\mathcal{L}(\mathbb{X}, \mathbb{Y}_c)$ are bounded linear operators? Usually unbounded operators are not defined on the whole space $\mathbb{X}$ but only on their domain. - Could you comment on the need to parameterize the measures using two other measures, $\mathrm{d}\mu_c(y) = \nu_c(y) \mathrm{d}\rho_c(y)$? In the end, we first choose $\rho$ then take $\nu$ according to the CS strategy, so what is the influence of different choices of $(\rho, \nu)$ yielding the same $\mu$? 
- Empirical nondegeneracy as presented in Eq. (2.5) is a void concept unless $\alpha' \approx \alpha$ and $\beta' \approx \beta$ are made precise. - In Theorem 2.5, the symbol you use for the estimator is not defined until way too long after. - The notion of optimal sample complexity is not explained well enough, and I struggle to see exactly with respect to what it is optimal. How can it be optimal given the crude estimate for the sum of Christoffel functions? - Section 3.1: Have you studied the impact of the choice of $\rho$ on numerical performance? What if $\rho$ were a Laplace measure? How do you choose $f$ in the experiments? Also, why not write $(y_i, f(y_i), \nabla f (y_i))$? This avoids overloading the subscript notation in line 752. - Propositions 4.3 and 4.4 could be renamed corollaries or examples. Typos: - Line 61: vector-valued instead of vector space valued. - In Assumption 2.2, the mappings $L_c$ are from $D_c$ to $\mathbb{Y}_c$ (also in Definition 2.4; check the whole text). - In Eqs. (2.2) and (2.3) it should be $\forall u \in \mathbb{U}$. - "Hence, the sampling operators and measures preserve the X-norm in expectation" is very vague. - Appendix B1, line 759: no need for the complex conjugate. Why do you need to work in $\mathbb{C}$ after having defined your function as $\mathbb{R}$-valued? Line 763: $\rho$ is the same measure for all indices $k$. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Most limitations are discussed in the paper. I would also like to mention that the cs4ml framework is not suited to all regression problems, but only to those where one controls the sampling strategy. Alas, in practice you seldom know $\rho$, because the data distribution is often unknown. 
(Correct me if I'm wrong) Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the referee for their excellent feedback. ## Weaknesses ### Bullet 1 We agree that the framework is complicated. This was raised by other referees and we have discussed it in our global rebuttal. In brief, we believe it is justified given the range of applications we address. However, we also agree this justification needs to be made more clearly. **Please see our global rebuttal for the changes we will make in this regard.** ### Bullet 2 We are not sure what the referee means here. Our main assumption is that, for each $c$, we can query $L_c(y)(f)$ for any $y \in D_c$. This is essentially the same assumption as in standard active learning, where $L_c(y)(f) = f(y)$. Note this assumption holds for Examples 1-3. We agree that this is worth clarifying. **We will add this as an Assumption in Section 2 and remark that it holds for our examples.** The addition of the classic regression problem to Section 2 will also help clarify this point. ### Bullet 3 This issue was also raised by other referees. We agree, and we will address this. **Please see our global rebuttal.** Regarding a journal version, referee n7M7 suggested that we edit the arXiv preprint so that it is not so appendix-heavy. **We will do this.** ## Questions **We agree, and we will change this as follows.** We will rename the measurement locations as $\theta_{ic}$. We prefer not to use $x_{ic}$, since $x$ is often used to denote a spatial location, whereas our samples could be, e.g., frequencies. $\Theta_{ic}$ will be the corresponding random variable, instead of $Y_{ic}$. The measurements will be $y_{ic} = L(\Theta_{ic})(f^*) + e_{ic}$. We will write $\mathbb{F}$ for the hypothesis set, with elements $f \in \mathbb{F}$, object $f^*$ and estimator $\hat{f}$. We will also change Definition 2.4 to use $\mathbb{F}$. ## Other questions/comments ### Bullet 1 This was done to allow for the pointwise sampling operator $f \mapsto f(y)$, which is unbounded over $L^2_{\rho}(D)$. 
However, we agree it is problematic. After considering it, we have concluded it is best to assume that $f^*$ and $\mathbb{F}$ lie in some normed vector subspace $\mathbb{X}_0$ of $\mathbb{X}$ and define $L_c : D_c \rightarrow \mathcal{B}(\mathbb{X}_0,\mathbb{Y}_c)$, so that $L_c(\theta_c)$ is a bounded linear operator. In the case of pointwise sampling, one can, e.g., let $\mathbb{X}_0$ be the space of continuous functions on $\bar{D}$. **We will change this.** It does not alter the main results/proofs significantly. ### Bullet 2 Good question. For each $c$ the set $D_c$ generates the samples as $L_c(\theta_c)(f^*)$ for $\theta_c \in D_c$. The approach pursued in this paper is to find an ‘optimal' probability measure $\mu_c$ on $D_c$ (in the sense of sample complexity). To do this, we first assume there is a measure space $(D_c,\mathcal{D_c},\rho_c)$. Then in Assumption 2.1 we assume that the candidate $\mu_c$'s are absolutely continuous wrt $\rho_c$ with Radon-Nikodym derivatives $\nu_c$ that are positive a.e. This is done so that the search for the optimal $\mu_c$ is reduced to finding an optimal a.e. positive function $\nu_c$ that satisfies $\int_{D_c} \nu_c(y) d \rho_c(y) = 1$. We do this by showing (e.g. Theorem 4.2) how $\nu_c$ affects the sample complexity, and then optimize this bound (Section 4.2). Note that $\rho_c$ is often not something we can choose. For example, in standard regression, $\mathbb{X} = L^2_{\rho}(D)$ and $L(y)(f) = f(y)$. One can think of $\mathbb{X}$ as ‘the space in which we measure the error’, in which case $\rho$ is usually dictated to us. In particular, even though we draw samples according to $\mu$, the error is still measured wrt $\rho$. Typical examples here are the uniform measure on bounded domains or the Gaussian measure on $\mathbb{R}^d$ (as we use in Example 1). It is worth explaining this. As noted, **we will add the classic regression problem as an example in Section 2. 
We will also comment on $\rho$ and $\mu$ therein.** ### Bullet 3 **We will change this to say 'empirical nondegeneracy holds with constant $0 < \delta < 1$ if (2.5) holds with $\alpha' = (1-\delta) \alpha$ and $\beta' = (1+\delta) \beta$'.** This is what we prove in Theorem 4.2. ### Bullet 4 **We will move the definition of $\check{f}$ in Theorem 4.8 earlier in the paper.** ### Bullet 5 For a single ($d = 1$) or for a small number ($d \approx 1$) of subspaces the bound (2.8) is near-optimal, i.e. log-linear in $n$. However, it is highly suboptimal for $d \gg 1$. We noted this in lines 331-335. However, we agree the reader may well miss this. **We will move this discussion to directly after Theorem 2.5 to clarify.** ### Bullet 6 Good question. We have not done this. For the applications that motivated this example, namely regression problems in UQ, it is typical to use the Gaussian measure. We have not seen the Laplace measure used in such applications before. An advantage of the Gaussian measure is that the orthogonal polynomials are tensor-products of the 1D Hermite polynomials, and we also have explicit expressions for their gradients. This allows us to solve the training problem numerically (see Sections B.2-B.4). Note that $\rho$ also dictates the norm in which the error is measured, i.e., the $L^2_{\rho}$-norm. Therefore, it is unclear how best to compare different choices of it. The test function in Figure 1 was chosen as a simple example to demonstrate the main effects. We could show other test functions, with similar outcomes. 
**If the referee thinks this would be beneficial, we can add these to the Appendices.** **We agree about the notation and will change it.** ### Bullet 7 **We will change them to corollaries.** ## Typos ### Bullets 1-3 **We will fix these.** ### Bullet 4 **We agree and will delete this.** ### Bullet 5 **We will correct these typos.** ## Limitations We agree, but unless we have misunderstood, we believe that this is true of active learning in general. **As noted above, we will add this as an assumption in Section 2.** --- Rebuttal Comment 1.1: Title: Acknowledging the rebuttal Comment: I thank the authors for their detailed answer. I have no doubt that the paper's quality will be greatly improved once the proposed changes are implemented, and I have raised my score accordingly. --- Reply to Comment 1.1.1: Comment: Thank you for your positive response!
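To illustrate the interplay of $\rho$, $\nu$ and the Christoffel function discussed in Bullets 2 and 6 above, here is a minimal sketch in the classical setting. The choices here are illustrative, not the paper's Example 1: we take $\mathbb{F}$ to be the polynomials of degree $< n$ on $[-1,1]$ and $\rho$ the uniform probability measure, so the orthonormal basis is the scaled Legendre polynomials $\sqrt{2k+1}\,P_k$ and the optimal density is $\nu^\star = K/\kappa$:

```python
import numpy as np
from numpy.polynomial import legendre

# Subspace: polynomials of degree < n on [-1,1]; reference measure
# rho = dx/2 (uniform probability).  Orthonormal basis: sqrt(2k+1) P_k,
# so the Christoffel function is K(y) = sum_k (2k+1) P_k(y)^2.
n = 10

def christoffel_K(y):
    y = np.asarray(y, dtype=float)
    K = np.zeros_like(y)
    for k in range(n):
        c = np.zeros(k + 1)
        c[k] = 1.0  # coefficient vector selecting P_k
        K += (2 * k + 1) * legendre.legval(y, c) ** 2
    return K

# Gauss-Legendre quadrature to integrate against rho = dx/2.
nodes, weights = legendre.leggauss(100)
K_vals = christoffel_K(nodes)

kappa = np.sum(weights * K_vals) / 2   # integral of K: equals dim = n
nu_star = K_vals / kappa               # optimal density nu* = K/kappa wrt rho

# K is sharply peaked at the endpoints: max K = n^2 versus mean n, which
# is why Christoffel sampling oversamples near y = +-1 in this setting.
K_max = christoffel_K(np.array([1.0]))[0]
```

The gap between the mean of $K$ (which drives the CS bound) and its maximum (which drives the MCS bound) is exactly the $n$ versus $n^2$ contrast computed here.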
Summary: The authors introduce an active learning framework for regression problems based on the concept of generalized Christoffel functions. The proposed approach is applicable to a broad range of scenarios and is evaluated on several scientific computing tasks. Strengths: The manuscript presents a comprehensive explanation of the main ideas behind the proposed CS4ML framework. The main motivation is relevant (active learning for diverse data scenarios) and the authors point to promising directions for further investigation. The work is well organized, which is especially important for theory-focused contributions. I also praise the detailed supplementary material provided. Weaknesses: The experimental section, although tackling diverse tasks, unfortunately compares the proposal only to an inactive learning strategy (Monte Carlo sampling), which diminishes the relative impact of the introduced method. I appreciate this point being highlighted as a limitation by the authors, but it does leave the reader wanting a bit more. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Although the manuscript lists 118 references, I think it lacks a more thorough in-text discussion of other available active learning strategies. Which literature gaps are addressed? Which scenarios are not entirely covered by standard methods? As a final minor observation, the left plot in Fig. 2 presents some hard-to-distinguish line patterns, which could be improved. Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: The manuscript states its limitations sufficiently. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the referee for these excellent comments. We will discuss them in reverse order: ### 'Although the manuscript....' In terms of active learning, the main contribution of this article is to extend certain active learning techniques (namely, leverage score sampling) to much broader types of measurements. The majority of past work on active learning considers pointwise samples of a target function. In fact, we are unaware of any works that systematically tackle more general types of measurements as we do in this paper. This is the main gap in the literature we aim to address. We fully agree, though, that there should have been a more thorough in-text discussion of the active learning literature in the paper. Some of this discussion can be found in Appendix A. **As also discussed in our global rebuttal, we will move parts of this to Sections 1 and 2 of the final paper. We will also add more discussion as relevant.** ### 'The experimental section,...' For the reasons described above, we are not aware of other active learning techniques against which we could make a 'global' comparison across all three examples. As we commented briefly in the conclusion section (Section 5), there has been much previous research on problem-specific active learning strategies that outperform Monte Carlo sampling. However, the main aim of our article is not necessarily to achieve state-of-the-art performance in each example. Rather, it shows how a single active learning technique (Christoffel sampling), which also comes with theoretical guarantees, can improve on inactive learning across a broad spectrum of problems. As we mentioned in our global rebuttal, it is important to compare different strategies. But we think such a comparison is well beyond the scope of the current article. It is likely that a comprehensive comparison in each of our three applications could make for a paper on its own. 
For example, there has been a slew of recent papers on sampling strategies for PINNs (Example 3 in our paper). We cite these in Section 5. To perform a robust comparison, we would need to consider each method in comparison to CS across a range of different PDEs, rather than just the Burgers' equation problem we consider in this article as proof of concept. This is certainly worth doing, due to the high level of current interest in PINNs for PDEs. Similarly in MRI, there has been a large amount of past research on sampling strategy design (we also cite this in Section 5). A thorough comparison would need to consider different datasets -- i.e., not just brain images, but other types of common MR image datasets such as knee or abdomen images -- and different MRI modalities (i.e., single versus parallel MRI), as well as different generative neural network architectures. As also mentioned in our global rebuttal, another interesting question for future work is whether one could use CS as a starting point for devising even more advanced active learning methods. In general, this could involve using either the linear-sample sparsification techniques of [32], as briefly mentioned in Section 5, or 'boosting' techniques (Haberstich, Nouy & Perrin). Or, there could be domain-specific tools one could employ to improve the performance of CS in a particular application (e.g., PINNs or computational imaging). In summary, we think that these are very interesting questions for future work, but that they are well beyond what we can feasibly achieve in this article. Nonetheless, it is worth elaborating on this matter in the revision. **To address this issue, we will expand the discussion in Section 5 on other active learning strategies for the main examples and re-emphasize the main contributions of CS4ML to this problem.** ### 'As a final...' Good point. We will also improve the left plot of Figure 2. 
--- Rebuttal Comment 1.1: Comment: I thank the authors for the answers. I believe the addressed issues and modifications proposed across all the reviews will greatly improve the understanding and the contribution of the work. Thus, I maintain my acceptance rating. --- Reply to Comment 1.1.1: Comment: Thank you for your kind words.
Rebuttal 1: Rebuttal: We thank the referees heartily for their insightful comments and the time and effort they put into carefully reviewing our manuscript. Each referee has made valuable points that will undoubtedly improve the final version of the paper. We have provided detailed responses to each review separately below. However, in this global rebuttal we also want to discuss some of the main comments raised by the referees. Note that in this and the other rebuttals, **all specific changes we intend to make in the final article are in bold. We do not anticipate that these changes will exceed the one additional page allowed in the camera-ready version of the paper.** ### Experimental comparison/clarification of contribution (referees 9pYz, 7Xvg): Several referees commented that we only compared the proposed method against inactive learning (Monte Carlo sampling) and not against other active learning techniques. It is certainly the case that, for each of our three main examples, there has been much previous research on problem-specific active learning strategies. We commented on these briefly in the conclusion (Section 5). However, the main aim of our article was not necessarily to achieve the absolute state-of-the-art performance in each example. Rather, it shows how a single active learning technique (Christoffel sampling), which also comes with theoretical guarantees, can improve on inactive learning across a broad spectrum of problems. In particular, we extend classical leverage score sampling to a much broader class of problems, while maintaining its theoretical guarantees. We are unaware of any other current active learning strategy that can simultaneously address such a broad class of problems. **We will add a sentence to this effect in Section 1.2 (Contributions).** We very much agree, however, that it is important to compare different strategies. But we think such a comparison is well beyond the scope of the current article. 
It is likely that a comprehensive comparison in each separate application could make for a paper on its own. We are interested in doing this, especially to see how well CS performs against other techniques. A related, and interesting, question for future work is whether one could use CS as a starting point for even more advanced active learning methods, e.g., by using the linear-sample sparsification techniques of [32], as briefly mentioned in Section 5, or by using 'boosting' techniques (Haberstich, Nouy \& Perrin). **To address this issue, we will expand the discussion in Section 5 (Conclusions, limitations and future work) on other active learning strategies for the main examples and re-emphasize the main contributions of CS4ML to this problem.** ### Dense presentation (referees tt34, 7Xvg, n7M7): Several referees found the presentation of the framework in Section 2 dense and difficult to follow. We agree and will make a series of changes to address this. First, **referees tt34 and n7M7 proposed notational changes to improve readability, which we will implement. Second, as recommended by referee n7M7, we will add the classical regression problem as a running example in Section 2 (The CS4ML framework and main results) to clarify how this sits within the general framework.** This will make the generalizations we introduce easier to understand and motivate. **As also suggested, we will move parts of the discussion on leverage score sampling from Section A.2 to Section 2.** Many readers are likely already familiar with leverage scores, so doing this will greatly aid readability. Overall, we believe these changes will make Section 2 much clearer for the reader. ### Complicated framework (referees tt34, n7M7): Several referees commented that the framework we propose is complicated/daunting due to its generality, without proper justification. We agree that the framework we propose is very general and the notation can feel quite heavy.
As noted above, **we will implement a number of notational changes for clarity.** In terms of the formulation of the framework, however, we think its generality is justified by the broad range of applications it can address. The main ways our framework generalizes standard active learning are that i) we allow the target object to be an element of an arbitrary Hilbert space $\mathbb{X}$, ii) we allow arbitrary linear sampling operators, with measurements that are scalar- or vector-valued, and iii) we allow for potentially multimodal data. i) is a useful extension, and not too difficult to achieve. Both ii) and iii) are motivated by the examples. ii) allows one to consider sampling with, e.g., the Fourier or Radon transform, both of which are very common in computational imaging (e.g., MRI reconstruction in Example 2 of the paper). It also allows vector-valued measurements, which arise in both Examples 1 and 2 of the paper. Finally, multimodal data is found in many real-world applications, and is encountered in Example 3, as well as in practically important extensions of Examples 1 and 2 (see Sections B.7 and C.7, respectively). We discussed these motivations in Section 1.1 (Motivations) and later in Section 3 (Examples and numerical experiments) when we describe the three examples. However, they are worth elaborating on for clarity. **We will add a short discussion in Section 1.2 (Contributions) to explain lines 60-63 and thereby justify the generalizations. Further, as discussed above, we will also add the classical regression problem as an example in Section 2.** This will allow us to better justify the generalizations there as well.
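As a concrete illustration of the classical special case mentioned above, here is a minimal numpy sketch of leverage-score sampling for least-squares regression (the setting CS4ML generalizes). The function names are our own, for illustration only, and are not taken from the paper or its code.

```python
import numpy as np

def leverage_scores(A):
    # The leverage score of row i is the squared norm of row i of U,
    # where A = U S V^T is a thin SVD; the scores sum to rank(A).
    U, _, _ = np.linalg.svd(A, full_matrices=False)
    return np.sum(U**2, axis=1)

def sampled_least_squares(A, b, m, rng=None):
    # Draw m rows with probability proportional to their leverage scores,
    # then reweight so the sketched problem is an unbiased surrogate
    # for the full least-squares problem.
    if rng is None:
        rng = np.random.default_rng(0)
    p = leverage_scores(A)
    p /= p.sum()
    idx = rng.choice(len(A), size=m, p=p)
    w = 1.0 / np.sqrt(m * p[idx])
    x, *_ = np.linalg.lstsq(w[:, None] * A[idx], w * b[idx], rcond=None)
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((500, 5))
x_true = np.arange(1.0, 6.0)
b = A @ x_true                      # noiseless labels for the demo
x_hat = sampled_least_squares(A, b, m=100, rng=rng)
```

In this noiseless demo the sketched solve recovers the true coefficients; with noisy labels the sampled problem remains an unbiased surrogate for the full one.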
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Enhancing Adaptive History Reserving by Spiking Convolutional Block Attention Module in Recurrent Neural Networks
Accept (poster)
Summary: The present study introduces a novel model of spiking recurrent neural networks, which incorporates a spike convolutional block attention mechanism. This model is referred to as SRNN-SCBAM. The primary objective is to effectively incorporate historical information into the spatial and temporal characteristics of spatiotemporal patterns, resulting in improved memory retrieval and elimination of redundant historical data. The efficacy of the proposed model in leveraging historical information and attaining high precision has been validated through experiments conducted on DVS128-Gesture datasets. Strengths: The practical issue of invoking adaptive memory in spiking recurrent neural networks has been addressed in this paper. The primary objective is to incorporate a Spiking Convolutional Block Attention Module into the gating computation of Spiking ConvLSTM networks. This would enable the selective activation of historical information and the elimination of superfluous history during the training phase. This will provide the iterative calculation with a favorable initial state. The motivation is interesting and the idea of utilizing a spiking convolutional block attention mechanism to acquire historical data in gating computation appears intriguing. Weaknesses: 1. More ablations of the different gates being open or closed in Table 1 are required, because the influence of ‘i’ is unclear. Moreover, if there were adaptive parameters to control the probability of a gate being open, could the RSNN achieve better performance? 2. There is a typo in Figure 4. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. When encountering a new dataset, how does the model work and how should the architecture be designed? I don't see a discussion of this in the paper. 2. More ablations of the different gates being open or closed in Table 1 are required, because the influence of ‘i’ is unclear.
Moreover, if there were adaptive parameters to control the probability of a gate being open, could the RSNN achieve better performance? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The limitations are the same as the weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer's thoughtful comments and insights regarding our article proposing the SRNN-SCBAM model. We would like to address the points raised and provide further clarification on certain aspects. **W1:** "More ablations of the different gates being open or closed in Table 1"? **A:** As you suggested, we conducted additional ablation experiments in our study to further explore the effects of different open or closed gates, as shown in Table 1. In our subsequent work, we will also investigate the potential of utilizing adaptive parameters or dynamic sub-networks to control gate opening and closing. |    Method | f | i | o | Accuracy | |-----------|---|---|---|-----| | RSNN-SCBAM | 0 | 1 | 0 | 91.3| | RSNN-SCBAM | 1 | 1 | 0 | 92.0| We will incorporate all feedback and correct all typos. --- Rebuttal Comment 1.1: Title: Thanks for the response Comment: I would like to thank the authors for the response. My concerns have been addressed.
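To make the gate-ablation setup concrete, here is a minimal, hypothetical numpy sketch of an LSTM-style step in which each gate can be held fully open (i.e., ablated). This is a simplified non-spiking stand-in for the gating in Spiking ConvLSTM, with our own names and the assumed convention that a disabled gate is fixed at 1; the paper's Table 1 may use a different convention.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_step(x, h, c, params, use_f=True, use_i=True, use_o=True):
    # One LSTM-style step; a disabled gate is held fully open (fixed at 1),
    # which mimics ablating that gate from the cell.
    Wf, Wi, Wo, Wg = params
    z = np.concatenate([x, h])
    f = sigmoid(Wf @ z) if use_f else np.ones_like(c)
    i = sigmoid(Wi @ z) if use_i else np.ones_like(c)
    o = sigmoid(Wo @ z) if use_o else np.ones_like(c)
    c_new = f * c + i * np.tanh(Wg @ z)   # candidate memory gated by i
    h_new = o * np.tanh(c_new)            # output gated by o
    return h_new, c_new

rng = np.random.default_rng(0)
d_in, d_hid = 4, 3
params = [rng.standard_normal((d_hid, d_in + d_hid)) * 0.1 for _ in range(4)]
x, h, c = rng.standard_normal(d_in), np.zeros(d_hid), np.zeros(d_hid)
h1, c1 = gated_step(x, h, c, params, use_o=False)   # 'o' gate ablated
```

With the output gate ablated, the hidden state reduces to `tanh` of the cell state, which is exactly the kind of behavioral change the gate ablations probe.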
Summary: This study proposed the spiking recurrent neural networks model with a spiking convolutional block attention module component (SRNN-SCBAM). The proposed model invokes the historic information in spatial and temporal channels adaptively through spiking CBAM, which brings the advantages of efficient memory calling and history redundancy elimination. The experimental results show that the proposed SRNN-SCBAM model makes better use of the historic information in spatial and temporal dimensions with less memory space, and achieves higher accuracy compared to other models. Strengths: 1. The learning algorithm for recurrent SNNs is simple and easy to understand. 2. Well-thought-through reuse of standard modeling and algorithmic components to solve memory calling and history redundancy elimination in recurrent SNNs. Weaknesses: Although the motivation is reasonable and the proposed methods are simple and easy to implement, the experimental results could be supplemented to better indicate the advantages of the proposed model. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Table 2: What is the network structure in [23]? The network structures should be presented more clearly to show the components in the network models. 2. Why is the performance of Spiking ConvLSTM with Spiking CBAM considerably higher than that with CBAM in Table 1? 3. What is the value of the parameter alpha used in Equation (7)? 4. It seems the proposed Spiking ConvLSTM is suitable for long-term time-series datasets due to its adaptive memory maintenance. Could the proposed model be applied to other time-series datasets? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Some details of training the recurrent SNNs are missing in the paper (only mentioned in a few sentences).
It would be better to also include those settings in the appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1:** "Table 2: What is the network structure in [23]?" **A:** The network architecture employed in Table 2 [23] is as follows: 128C3(Encoding)-128C3-AP2-128C3-256C3-AP2-1024FC-Voting. We supplement the network structures in Table 2 as follows: Table 2 [19] adopts three layers of time surface prototypes. Table 2 [20] consists of the following three modules: a motion detector, a bank of binary feature extractors, and a Random Ferns classifier. The number of ferns for CIFAR10-DVS is (50, 14), and there are 15,000 segment events with a patch size range between 12 and 20. Table 2 [21] adopts the p-N-DRAW recognition network, using pLSTM layers as encoders. Table 2 [22] employs the temporal surface representation method with local memory, which divides all event points within a spatial region into different fixed cells. Within each cell, all event points are accumulated to generate a temporal surface, obtaining features. Classification is achieved using an SVM (Support Vector Machine). **Q2:** "Why is the performance of Spiking ConvLSTM with Spiking CBAM considerably higher than that with CBAM in Table 1?" **A:** Firstly, the transmission in SNNs is in the form of spike signals, so the CBAM designed for ANNs may not be particularly suitable for processing event-based data within an RSNN. Secondly, the spiking attention utilizes the MultiStepLIFNode, which leads to the higher performance of Spiking ConvLSTM with Spiking CBAM compared to using conventional CBAM. **Q3:** "What is the parameter of alpha used in Equation (7)?" **A:** $\alpha$ is the parameter that controls the smoothness of the gradient during backpropagation in the surrogate gradient learning process; it is set to 4.0 in this paper.
**Q4:** "It seems the proposed Spiking ConvLSTM is suitable for long-term time-series datasets due to its adaptive memory maintenance. Could the proposed model be applied to other time-series datasets?" **A:** Yes, the proposed model can be applied to other time-series datasets. The proposed Spiking ConvLSTM handles different time-series data and applications well, such as object tracking. It is worth noting, however, that the proposed Spiking ConvLSTM requires spike-based input; hence, when facing other time-series data, the input data should first be encoded into spike trains. --- Rebuttal Comment 1.1: Comment: I have read the response. All my concerns have been well addressed.
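As a concrete illustration of encoding real-valued time-series into spike trains, here is a minimal sketch of Bernoulli rate coding. This is one common encoding choice; the rebuttal does not specify which encoder the authors use, so this is an assumption for illustration.

```python
import numpy as np

def rate_encode(values, T=20, rng=None):
    # Bernoulli rate coding: a value in [0, 1] becomes a length-T spike
    # train whose per-step firing probability equals the value.
    if rng is None:
        rng = np.random.default_rng(0)
    v = np.clip(values, 0.0, 1.0)
    return (rng.random((T,) + np.shape(v)) < v).astype(np.uint8)

# Three normalized signal values -> a (T, 3) array of 0/1 spikes.
spikes = rate_encode(np.array([0.0, 0.5, 1.0]), T=20)
```

A value of 0 never fires and a value of 1 fires at every step, so the time-averaged spike rate approximates the original signal value.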
Summary: The authors proposed a recurrent spiking neural network (RSNN). The essential component of the proposed RSNN is a spiking conv block attention module (SCBAM), which contains channel and spatial attention blocks. The proposed method is validated with classification tasks on CIFAR10-DVS and DVS128 gesture datasets. Strengths: N/A Weaknesses: -1. The writing of the submission should be dramatically improved. Many things are hard to follow. In lines 242-243: "Although the ...is lower than LIAF-Net, ...". In Table 2, [23] provides lower accuracy than the proposed method. Therefore, I cannot really trust the provided experimental results. -2. Lines 114-115: "Spiking Convlstm is more suitable for removing facial expression data...." Why? -3. SNNs differ from CNNs in that they can process temporal information without any special treatment, as many works such as [R-1] and [R-2] have proved. The process of membrane potential release, accumulation, and spike triggering is a natural temporal extractor. I do not think using an RNN-based architecture is a meaningful way to go in the SNN domain. At the minimum, the authors should provide experimental results of the comparison with these methods, such as [R-1] and [R-2]. -4. How does the potential threshold of the LIF impact the performance of the proposed methods? -5. How does the number of LSTM steps impact the performance of the proposed methods? [R-1]: Biologically Inspired Dynamic Thresholds for Spiking neural networks. [R-2]: Spiking Transformers for Event-based Single Object Tracking Technical Quality: 2 fair Clarity: 1 poor Questions for Authors: Please see the weaknesses section. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 1 poor Contribution: 1 poor Limitations: No limitation is provided.
From my perspective, the work follows the thoughts of ANN without an insightful understanding of SNN, especially the temporal processing power of SNNs. SNNs offer compelling temporal feature extraction capability without any special treatment, evidenced by many works already. In addition, I did not see the novelty of the proposed approach from either ANN or SNN. Therefore, I think the submission is way below the bar of NeurIPS. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer KuzU, Thank you for the thorough review and constructive criticism. There are some misunderstandings about our paper; we hope the following clarifications resolve them. **W1:** "the mismatch in the description of the comparison with LIAF-Net"? **A:** Thanks for pointing that out. Actually, [23] contains two different model settings, LIF and LIAF, with two different accuracies of 63.53% and 70.40% on the CIFAR10-DVS dataset, respectively. Because the LIF model is employed in this study, we show the former result with 63.53% accuracy in Table 2 for direct comparison with our study. Meanwhile, we also refer to the latter result of 70.40% in lines 242-243, with the description "Although the classification accuracy of SRNN-CBAM is lower than LIAF-Net [23], the adaptive memory invoking should be emphasized for the further application of spatiotemporal pattern processing." to show the advantages and disadvantages of the proposed RSNN and illustrate the whole view of our study. We will rewrite the corresponding description to make it clear, by adding both the LIF-Net and LIAF-Net results of [23] to Table 2 and supplementing the description "Our proposed SRNN-CBAM method performs better than the LIF-Net model in [23]." **W2:** "Spiking ConvLSTM is more suitable for removing facial expression data"? **A:** Here it means that the Spiking ConvLSTM is more suitable for spike-based coarse-grained data than for fine-grained images such as facial expression data. In detail, on the DVSGesture dataset, each sample is captured by a dynamic vision sensor (DVS), and the facial expression during each gesture is removed because of the event-driven response property of the DVS. Hence, the Spiking ConvLSTM is more suitable for DVS data than for pixel-level fine-grained images.
Thanks for pointing that out; we will change the description to "Spiking ConvLSTM is more suitable for processing dynamic spatiotemporal patterns, such as gesture recognition with clear gesture moving trajectories, but without preserving the details of fine-grained, high-resolution images." **W3 and W4:** "comparison with [R-1] and [R-2]"? "How does the potential threshold of LIF impact the performance of the proposed methods?" **A:** Thanks for your suggestion. Here we conducted ablation experiments with dynamic thresholds during training on the DVSGesture dataset, although we could not compare with the spiking Transformer in [R-2], since its object tracking framework cannot be directly applied to our application scenario. Two different dynamic threshold methods in SNNs, with LIFNode and MultiStepLIFNode, are employed in our RSNN-SCBAM model. We made the thresholds of MultiStepLIFNode and LIFNode trainable parameters and conducted experiments to validate their impact. The results are as follows: | Method | Node_ threshold | Acc (%)| |---------------|-------------|-------------| | RSNN-SCBAM | LIFNode | 91.3 | | RSNN-SCBAM | MultiStepLIFNode | 89.9 | However, the current results do not show any significant improvements with the dynamic threshold. This threshold adjustment could make the model relatively intricate, involving an increased number of parameters and non-linear operations. If the model's parameters or network structure are not adjusted correctly, it might lead to suboptimal model performance. Additionally, SNNs might demand a larger amount of training data and an extended training time for effective learning and optimization. [1] Dayan P, Abbott L. Computational neuroscience: Theoretical neuroscience: Computational and mathematical modeling of neural systems[M]. Cambridge: MIT Press, 2001: 162-166. [2] Brunel N, Latham P E. Firing rate of the noisy quadratic integrate-and-fire neuron[J]. Neural Computation, 2003, 15(10): 2281-2306.
[3] Pellegrini T, Zimmer R, Masquelier T. Low-activity supervised convolutional spiking neural networks applied to speech commands recognition[C]//2021 IEEE Spoken Language Technology Workshop (SLT). IEEE, 2021: 97-103. **W5:** "How does the number of steps impact the performance of the proposed methods?" **A:** Regarding the variation in LSTM time steps, we conducted experiments using additional time step values of 10, 15, and 25 on the DVSGesture dataset. As shown in the following table, the results show that a time step of 20 is the optimal choice. This could be attributed to the nature of the DVSGesture data, where gestures are repeated by individuals at specific time intervals. The accuracies of RSNN-SCBAM first increase and then decrease as the time steps grow from 10 to 25. Different time steps capture varying temporal features, leading to differences in test results. Fewer time steps could cause an excessive concentration of information, failing to extract enough event features, while larger time steps may extract superfluous information and fail to capture the key information. | Method | Time step | Accuracy (%) | |---------------|-------------|-------------| | RSNN-SCBAM | 10 | 89.2 | | RSNN-SCBAM | 15 | 92.4 | | RSNN-SCBAM | 25 | 90.3 |
Summary: This article proposes a spiking recurrent neural network model with a spiking convolutional block attention mechanism, called SRNN-SCBAM. Its main idea is to adaptively call historical information in the spatio-temporal features of the spatio-temporal pattern, which has advantages in efficient memory calls and eliminating historical redundancy. An experiment on the DVS128 gesture dataset is conducted to verify the effectiveness of the model in utilizing adaptive historical information. The idea of the article is clear, the writing is clear, and examples are provided to illustrate the results. Strengths: 1. The paper is easy to read, and generally well written. 2. The proposed RSNN model is simple but effective, and the motivation is clear. 3. Experimental evaluations were conducted on two different neuromorphic datasets to demonstrate the performance improvement with the proposed SCBAM under different settings. Weaknesses: There is a lack of explanation for some parameter settings. In addition, there are some shortcomings in the experimental aspect of the paper. The experimental results mentioned in the article on CIFAR10-DVS could generally be supplemented with appropriate visualization results. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: 1. The parameter setting of the time constant in the first paragraph of Section 2.4 should be explained in detail. 2. In addition, there are some shortcomings in the experimental aspect of the paper. The visualization experimental results mentioned in the article on CIFAR10-DVS could be supplemented with appropriate experiments. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: 1. The parameters should be explained further. 2. More visualization results could be added.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer 9YjJ, We really appreciate your insightful comments and feedback. We address your questions below, and we have revised our paper accordingly. **Q1:** "The parameter settings." **A:** Thanks for the suggestion. All the parameter settings are listed in the following table and, together with the description in Section 4.1, will be added to the revision: The integration time constant $\tau$ of the LIF neurons is set to 2.0. $\alpha$ is the parameter that controls the smoothness of the gradient during backpropagation in the surrogate gradient learning process, and is set to 4.0 in this paper. $u_{reset}$ is the reset voltage of the neuron and is set to 0. | Parameters | DVS Gesture | Cifar10-DVS | |---------------|-------------|-------------| | Epoch | 200 | 1024 | | Batchsize | 32 | 32 | | LIF, $\tau$ | 2.0 | 2.0 | | T | 20 | 20 | | $\alpha$ | 4.0 | 4.0 | | $u_{reset}$ | 0 | 0 | **Q2:** "The visualization results on CIFAR10-DVS." **A:** Thanks for the suggestion; we have supplemented appropriate visual results on the CIFAR10-DVS dataset, as shown in the one-page PDF rebuttal file. In detail, we visualize the extracted features at several time steps of the proposed RSNNs with and without SCBAM. It shows that the RSNN module with SCBAM captures information as anticipated, exhibiting strong sparsity across both temporal and spatial dimensions throughout the entire time steps, while the RSNN module without SCBAM only extracts limited features, which may lose a significant amount of crucial information. Thanks again for all of your valuable feedback, suggestions, and questions.
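The roles of $\tau$, $\alpha$, and $u_{reset}$ can be seen in a minimal LIF simulation. The sketch below assumes the common SpikingJelly-style update with a hard reset and a sigmoid-shaped surrogate gradient whose smoothness is set by $\alpha$; the exact equations in the paper (Equation (7)) may differ, so treat this as illustrative only.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def surrogate_grad(u_minus_th, alpha=4.0):
    # Smoothed stand-in for the derivative of the Heaviside spike function;
    # alpha controls the smoothness (larger alpha -> sharper).
    s = sigmoid(alpha * u_minus_th)
    return alpha * s * (1.0 - s)

def lif_forward(x_seq, tau=2.0, v_th=1.0, u_reset=0.0):
    # Iterative LIF dynamics: leaky integration toward the input,
    # threshold crossing, and a hard reset to u_reset after each spike.
    u, spikes = u_reset, []
    for x in x_seq:
        u = u + (x - (u - u_reset)) / tau
        s = float(u >= v_th)
        spikes.append(s)
        u = u_reset if s else u
    return np.array(spikes)

spikes = lif_forward(np.full(10, 1.5), tau=2.0)  # fires every other step
```

With a constant supra-threshold input the neuron settles into a regular firing pattern, and the surrogate gradient peaks exactly at the threshold crossing, which is what makes backpropagation through the spike possible.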
Rebuttal 1: Rebuttal: We thank the reviewers for their helpful feedback. We are encouraged that they found our motivation clear [Reviewer 9YjJ] and reasonable [Reviewer TiEP], and our proposed approach simple but effective [Reviewer 9YjJ] and novel [Reviewer NNKj]. We appreciate that [Reviewer TiEP] expresses agreement, stating, "This would enable the selective activation of historical information and the elimination of superfluous history during the training phase. This will provide the iterative calculation with a favorable initial state." We are delighted that everyone agrees our experimental evaluation is clear, and that the proposed model is highly suitable for handling spatiotemporal data. We are also pleased that the reviewers found the paper to be well-presented and well-organized [Reviewer 9YjJ], and the ideas presented to be intriguing [Reviewer TiEP]. Here, we emphasize the contributions of this paper: We introduce the Spiking Recurrent Neural Networks model with the Spiking Convolutional Block Attention Module (SRNN-SCBAM), aiming to leverage both spatial and temporal features of spatio-temporal patterns effectively. The SRNN-SCBAM model utilizes the spiking CBAM component to adaptively incorporate historical information from spatial and temporal channels, thereby benefiting from efficient memory retrieval and eliminating redundancy in historical data. To validate the model's efficacy, extensive experiments are conducted on the CIFAR10-DVS and DVS128-Gesture datasets. The ablation study and comparison results demonstrate that our proposed model achieves competitive performance among the existing RSNN models. Thanks to the reviewers' suggestions on experimental description [Reviewer 9YjJ], experimental settings [Reviewer KuzU, Reviewer TiEP] and biological plausibility [Reviewer TiEP], we have noticed the flaws in the original description from the constructive feedback and have prepared an updated version of the manuscript.
Here we will roughly summarize the changes made. For a more detailed reply and explanation, please refer to the individual responses below. **About the novelty:** As [Reviewer TiEP] expresses agreement, stating, "Well-thought-through reuse of standard modeling and algorithmic components to solve the memory calling and history redundancy elimination in recurrent SNNs", we think our study is not a simple combination of existing methods within the same paradigm. **The comparison with SNNs with learnable thresholds:** We added experimental results with the dynamic thresholds of MultiStepLIFNode and LIFNode made trainable. **The parameter settings:** We explained all the parameter settings and the network structures. **More visualization results:** We added visualization results in the one-page PDF rebuttal file. We will incorporate all feedback and correct all typos. Thanks again to all reviewers. Authors of paper 2180. Pdf: /pdf/f4155ec136bb6eb9ca08067ebf697ec9e5295b90.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Optimal Unbiased Randomizers for Regression with Label Differential Privacy
Accept (poster)
Summary: This paper investigates the bias of the state-of-the-art label-differential privacy (label-DP) mechanism proposed by Ghazi et al. (2023) and proposes bias-corrected randomizers obtained by solving a constrained linear program. The proposed label-DP mechanisms empirically demonstrate lower mean squared error on deep learning tasks with carefully tuned hyperparameters. Strengths: 1. The main result of this paper is intriguing and original. The paper provides a compelling example illustrating the bias present in the state-of-the-art label-DP mechanism and proposes a new mechanism to address this bias. Additionally, the authors thoroughly investigate the empirical performance of the proposed algorithm and show that it outperforms the state-of-the-art. 2. This paper is well-written. The authors effectively communicate their contributions and main findings in a clear and concise manner. Furthermore, the organization of the paper is well-structured. Weaknesses: 1. The proposed algorithm introduces extra computational cost. In order to obtain the unbiased randomizer, a constrained linear programming (LP) problem needs to be solved, which introduces extra computational cost. Although the authors demonstrate the feasibility of the LP, the term "feasible" left me uncertain, and it would be more informative to explicitly state the computational complexity involved. 2. The authors should consider the potential privacy implications of hyperparameter tuning. In the experimental section, they mention the careful tuning of hyperparameters. However, if the validation set used for this tuning procedure is not label-DP, there is a risk of privacy leakage. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: See the weakness part. Confidence: 4: You are confident in your assessment, but not absolutely certain.
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for thoughtful comments and questions. We include below the responses to the questions. > Computational cost and “feasibility” Firstly, we clarify that “a linear program (LP) is feasible” (e.g. in Prop. 7) means that the feasible set, a.k.a. the solution space of the LP is non-empty. This is important to show because otherwise, the method is not guaranteed to return a valid $\varepsilon$-DP mechanism. (Same response as to Reviewer Vhiv) While both the optimal biased and optimal unbiased randomizers can each be written as a solution to an LP, the solution to the former has a special structure that admits a dynamic programming algorithm (RR-on-Bins from prior work). However, no similar property seems to hold for the solution to the latter; hence explicitly solving the LP is the only available option now. Fortunately, even though solving the LP can be expensive, it is a “one-time” computation in our setting; the training time typically largely dominates the time taken to solve the LP. In addition, the LP also has a “knob”, namely mesh size, that can be used to tradeoff the LP computation and the utility of the unbiased algorithm. The following Table shows the running time of the LP for the unbiased randomizer, the noisy label loss and the final test loss, for different mesh sizes (parameter $n$ in Algorithm 2) for $\varepsilon = 1$ on the US Census dataset we study (prediction of number of weeks worked in $\{1, \ldots, 52\}$). We note the following: * Even though the wall clock time for computing the optimal unbiased randomizer is significantly larger than that of the optimal biased one, this is still orders of magnitude smaller than the ML model training. * The test loss is quite similar for various mesh sizes, even though the noisy label loss does improve slightly with finer discretization. 
This suggests that the unbiasedness was key to the improvements over RR-on-Bins and the discretization of the output set does not affect performance as much. | **Mechanism** | **Mesh size** | **Computing mechanism wall-clock time** | **Noisy label loss** | **Test loss** | |---|---|---|---|---| | RR-on-Bins | n/a | 0.154 s | 79.71 | 172.44 | | Optimal Unbiased Randomizer | 52 | 2.38 s | 1288.21 | 134.44 | | Optimal Unbiased Randomizer | 416 | 17.1 s | 1275.22 | 134.43 | | Optimal Unbiased Randomizer | 1664 | 161 s | 1274.71 | 134.43 | > Privacy implications of hyperparameter tuning We thank the reviewer for raising this important point. Indeed, hyperparameter tuning in general has additional privacy costs, and how to tune hyperparameters privately and efficiently is an active research topic [1, 2]. Consequently, it is common in the private ML literature to separate the question of private hyperparameter tuning and private training, and focus on comparing the privacy-utility trade-off under optimal hyperparameters [e.g., 3, 4, 5, 6]. In this paper, we follow this convention. **References:**\ [1] Nicolas Papernot and Thomas Steinke. Hyperparameter tuning with Renyi differential privacy, 2021.\ [2] Sander, Tom, Pierre Stock, and Alexandre Sablayrolles. Tan without a burn: Scaling laws of DP-SGD, 2023.\ [3] Malek, Mani, et al. Antipodes of label differential privacy: PATE and ALIBI, 2021.\ [4] He, J., Li, X., Yu, D., Zhang, H., Kulkarni, J., Lee, Y. T., Backurs, A., Yu, N., & Bian, J. Exploring the Limits of Differentially Private Deep Learning with Group-wise Clipping, 2022.\ [5] Kurakin, A., Song, S., Chien, S., Geambasu, R., Terzis, A., & Thakurta, A. Toward Training at ImageNet Scale with Differential Privacy, 2022.\ [6] De, S., Berrada, L., Hayes, J., Smith, S. L., & Balle, B. Unlocking High-Accuracy Differentially Private Image Classification through Scale, 2022. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed response. 
I would like to keep my score and suggest acceptance.
Summary: This paper proposes a differentially private algorithm for regression problems. The algorithm protects the privacy of the labels ("label DP"), in contrast to the entire example. The canonical application of this is digital advertising, where the label might be transaction data from a separate website. Furthermore, the algorithm operates in what [GKK+23] term the "feature-oblivious" model, where a private algorithm, operating solely on the labels, sends a message to the "features party," who then uses these together to learn a model. [GKK+23] studied a label randomizer that aims to minimize the difference between the true and noisy labels. This work studies unbiased randomizers. In addition to experiments, the paper provides some theoretical evidence that unbiased estimators, even if they have high variance, may be superior. The private algorithm (which, again, only has access to the labels) proceeds in two steps. First, it uses a private histogram to estimate a prior over the labels. It then solves a linear program, the output of which is our label randomizer. The LP minimizes the expected difference between the true and noisy labels, subject to privacy, unbiasedness, and normalization constraints. An important technical question is the randomizer's output space. Let $\mathcal{Y}\subseteq \mathbb{R}$ be the label space and $\hat{\mathcal{Y}}\subseteq \mathbb{R}$ the set of possible outputs of the randomizer. The solution to the LP has $|\mathcal{Y}|\times|\hat{\mathcal{Y}}|$ entries, representing a probability distribution over $\hat{\mathcal{Y}}$ for each entry in $\mathcal{Y}$. Thus it is clear that this approach demands both sets be finite (and of modest size, to solve the LP). What is less clear, due to the unbiasedness constraint, is what $\hat{\mathcal{Y}}$ should be. Note that if $\min \mathcal{Y} = 0$ and $\hat{\mathcal{Y}}=\mathcal{Y}$, then unbiasedness demands we map $0\to 0$ with probability 1, violating privacy.
Thus the endpoints of $\hat{\mathcal{Y}}$ must exceed those of $\mathcal{Y}$. The authors set the endpoints of $\hat{\mathcal{Y}}$ so that (provably) the LP is feasible. They "fill in" the rest of $\hat{\mathcal{Y}}$ with a grid, as finely spaced as their computational constraints allow. For a fixed $\hat{\mathcal{Y}}$ and prior on $\mathcal{Y}$, the LP finds the unbiased private randomizer with the lowest expected difference (or loss) in labels. Experiments demonstrate improvements over prior work on three data sets. I noticed that parts of the submission (introducing the label DP recipe, describing data sets, and reviewing related work) had substantial overlap with text from GKK+23. After discussion with my chair, I have not used this information in my evaluation of the paper. Strengths: Without unbiasedness constraints, GKK+23 find the best randomizer is of a form they call "RR-on-Bins." The authors nicely sum up their core innovation: "the addition of an unbiasedness constraint to the linear program leads to solutions to the LP that (i) are not RR-on-Bins solutions, (ii) can have substantially higher variance than RR-on-Bins, and (iii) nevertheless have a much lower train and test error due to the reduction in bias." This is a clear observation that allows them to move past prior work. I buy that this is a practical problem, that the feature-oblivious model is useful. Their approach does seem to significantly improve on that of GKK+23 on the data sets they tested. Their experimental results are complemented by their formal claims (Theorem 6 and Proposition 7) about the output label space of randomizers. With minor exceptions, I found the presentation very clear throughout. Weaknesses: In light of the work of GKK+23, this paper has limited novelty. Originality is also a serious weakness: GKK+23 point out that "our noising mechanism typically introduces a bias; mitigating it, say by adding unbiasedness constraints into the LP, is an interesting direction." 
The title says the work finds "optimal" unbiased randomizers, but that optimality only holds once the output space has been chosen. Theorem 6 (there is an optimal unbiased randomizer with at most $2|\mathcal{Y}|$ labels) and Proposition 7 (selecting endpoints to ensure feasibility) are good steps, but do not get us all the way there. As far as I can tell, it remains possible that some other scheme generates better unbiased randomizers, at least for some priors. The paper's focus is on feature-oblivious algorithms, but I would have liked to see a comparison with other label-DP algorithms that solve this problem. How much are we giving up by moving to feature-oblivious? Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Is there evidence (even informal) that this discretization approach is close to optimal? For instance, can we rule out that the LP's objective is highly sensitive to the choice of endpoints (in the regime considered)? Can you briefly sketch out what you feel are the key innovations beyond GKK+23? Estimating the prior requires some privacy budget for each bin. Do you expect that, as $|\mathcal{Y}|$ grows, another approach will perform better than yours? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 2 fair Limitations: The algorithm operates in the feature-oblivious model; other label-DP algorithms might perform better in different settings. The algorithm assumes prior knowledge of $\mathcal{Y}$; it is not always clear where this information comes from.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
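To make the LP described in the review concrete, here is a minimal sketch (an illustrative reconstruction from the review's description, not the authors' code; the function name, the toy label set, and the grid endpoints are my own choices):

```python
import numpy as np
from scipy.optimize import linprog

def unbiased_randomizer_lp(labels, out_labels, prior, eps):
    """Solve for an eps-DP unbiased label randomizer via a linear program.

    Variables q[i, j] = Pr[output = out_labels[j] | input = labels[i]],
    flattened row-major. The LP minimizes the prior-weighted expected
    squared error subject to normalization, unbiasedness, and eps-DP
    ratio constraints.
    """
    m, n = len(labels), len(out_labels)

    # Objective: sum_{i,j} prior[i] * q[i,j] * (labels[i] - out_labels[j])^2.
    c = np.array([prior[i] * (labels[i] - out_labels[j]) ** 2
                  for i in range(m) for j in range(n)])

    # Equalities: each row sums to 1 (normalization) and has
    # expectation labels[i] (unbiasedness).
    A_eq, b_eq = [], []
    for i in range(m):
        norm_row = np.zeros(m * n)
        mean_row = np.zeros(m * n)
        for j in range(n):
            norm_row[i * n + j] = 1.0
            mean_row[i * n + j] = out_labels[j]
        A_eq += [norm_row, mean_row]
        b_eq += [1.0, labels[i]]

    # Inequalities (local eps-DP): q[i,j] <= exp(eps) * q[i2,j]
    # for all pairs of inputs i != i2 and every output j.
    A_ub, b_ub = [], []
    for j in range(n):
        for i in range(m):
            for i2 in range(m):
                if i != i2:
                    row = np.zeros(m * n)
                    row[i * n + j] = 1.0
                    row[i2 * n + j] = -np.exp(eps)
                    A_ub.append(row)
                    b_ub.append(0.0)

    return linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                   A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                   bounds=(0, None), method="highs")
```

As the review notes, the output grid must extend beyond the label range for the LP to be feasible: with labels $\{1, 2, 3\}$ and $\varepsilon = 1$, a grid spanning $[-1, 5]$ admits a feasible solution, while a grid equal to the label set itself does not.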
Rebuttal 1: Rebuttal: We thank the reviewer for thoughtful comments and questions. We include below the responses to the questions. > key innovations beyond GKK+23 GKK+23 proposed an optimal (biased) randomizer (RR-on-Bins). While an unbiased mechanism was mentioned as a future direction in GKK+23, it was not clear how to realize this in practice. The key innovations of our paper beyond GKK+23 include a formulation and theoretical justification of an unbiased randomizer, a practical algorithm based on a heuristic discretization approach to the LP, a systematic evaluation with optimal hyperparameters, and empirical results that substantially outperform the previous state-of-the-art. > discretization approach is close to optimal? Thank you for the important question. Firstly, Theorem 6 implies that the output set is finite and hence some boundary exists. We constructed the heuristic for choosing the boundary points of the output set by experimenting with various strategies: first ensuring feasibility of the LP, and then expanding the boundary and stopping when we started seeing diminishing returns on the LP objective. We also empirically observe that the final mechanism is supported on values that are away from the boundaries we consider, so it seems likely that expanding the boundaries further does not change the optimal solution. The mesh size controls a trade-off between the computation time for solving the LP and the value of the noisy label loss. We note that it is possible to bound the optimality gap due to discretization in terms of the Lipschitz constant of the loss and the width of the mesh discretization interval (see the Lemmas towards the end of this rebuttal). In practice, however (see the table in response to Reviewer Vhiv), we find that a smaller mesh size is already able to recover good test loss, especially for small $\varepsilon$ values, even though a finer discretization reduces the noisy label loss by a small amount.
We leave the design of a more principled method for computing the optimal unbiased randomizer to future work. In practice, we find this heuristic to be sufficient for extracting utility out of these unbiased randomizers. We will add more discussion on this in the revision. > Estimating the prior requires some privacy budget for each bin ... as |Y| grows, another approach will perform better than yours? Note that, in estimating the prior, the amount of noise added to each bin is sampled from $\mathrm{Lap}(2/\varepsilon_1)$, since changing one label can modify the counts of only two bins (reducing one and increasing the other). Thus, we do not need a separate privacy budget for each bin. > feature-oblivious algorithms ... comparison with other label-DP algorithms that solve this problem. How much are we giving up by moving to feature-oblivious? Thanks for the suggestion! While there are feature-aware LabelDP algorithms for classification problems, we are not aware of any existing feature-aware LabelDP algorithms for regression problems. In this paper, we focus on feature-oblivious algorithms for simplicity. But we note that feature-awareness is an orthogonal component that can be easily added by extending our algorithm to use feature-aware priors instead of a global prior. One natural approach mentioned in the Conclusion section is to cluster the input features and build separate priors for each cluster. With this extension, the utility of our algorithm could potentially be improved -- but the trade-off between utility gain and implementation complexity will be heavily task and data dependent. We leave a systematic study to future work. --- **Lemma:** Let $M$ be the optimal unbiased randomizer with output labels bounded in $[A, B]$. Let $M_d$ be the optimal unbiased mechanism with output labels in $\{A, A+d, A+2d, \ldots, B-d, B\}$. Suppose that for all input labels $x$, all output labels $y \in [A, B]$, and all $c \in [-d, d]$ such that $y+c \in [A,B]$, we have $|\ell(x, y) - \ell(x, y+c)| < C_d$.
Then the noisy label loss satisfies $\mathcal{G}(M_d; \mathcal{P}) \leq \mathcal{G}(M; \mathcal{P}) + C_d$. _Proof._ From $M$, we construct an unbiased mechanism supported on $\{A, A+d, A+2d, \ldots, B-d, B\}$ by a postprocessing step called ‘unbiased rounding’. For an output label $k \in [A+nd, A+(n+1)d]$, any time we see $k$ as an output of $M$, we instead emit the value $A+(n+1)d$ with probability $(k - (A+nd)) / d$, and emit $A+nd$ with probability $(A+(n+1)d - k) / d$; this choice makes the expected emitted value equal to $k$. This postprocessing step is unbiased, and therefore preserves the mechanism being unbiased. It is clearly supported on $\{A, A+d, A+2d, \ldots, B-d, B\}$. Lastly, because $|\ell(x, y) - \ell(x, y+c)| < C_d$, unbiased rounding at the output value $k$ increases the noisy label loss by at most $\Pr(\text{output}=k) \cdot C_d$. Summing over all possible output labels $k$, we get $\mathcal{G}(M_u; \mathcal{P}) \leq \mathcal{G}(M; \mathcal{P}) + C_d$, where $M_u$ is the mechanism created by performing unbiased rounding on $M$. Since $M_d$ is the optimal mechanism over a set that includes $M_u$, we get $\mathcal{G}(M_d; \mathcal{P}) - \mathcal{G}(M; \mathcal{P}) \leq C_d$. **Lemma:** Suppose $\ell(x,y)$ is $K$-Lipschitz for $x,y \in [A, B]$. Then we may take $C_d = Kd$ in the lemma above. _Proof._ By the $K$-Lipschitzness of $\ell$, $|\ell(x, y) - \ell(x, y+c)| \leq K|c| \leq Kd$ for $c \in [-d, d]$, giving the lemma. Note that for the case of $\ell(x,y) = \frac{1}{2} (x-y)^2$, $\ell$ is $(B-A)$-Lipschitz. These two lemmas taken together show that as $d$ goes to $0$, that is, as the mesh gets finer, the noisy label loss of the unbiased mechanism obtained from the LP approaches the optimal noisy label loss, and they give a bound on the excess loss in terms of $d$. --- Rebuttal Comment 1.1: Comment: Thanks for the detailed response. Your elaboration on the discretization makes me feel much better about the approach. I will increase my score to a 7. On estimating the prior: of course you are correct, apologies for the mistake.
If $|\mathcal{Y}|$ is large relative to the number of samples, our estimate of the prior might be poor. I doubt this is of much concern; as you mention elsewhere, a different discretization or grouping of labels would likely work well. (No need to reply to this comment.)
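The $\mathrm{Lap}(2/\varepsilon_1)$ prior-estimation step discussed in this thread can be sketched roughly as follows (an illustrative sketch; the function name and the clip-and-normalize post-processing are my own choices, not necessarily what the paper does):

```python
import numpy as np

def private_prior(labels, label_values, eps1, rng=None):
    """Estimate a prior over label_values under eps1 label-DP.

    Changing one example's label moves a unit of count from one bin to
    another, changing two bins by 1 each (L1 sensitivity 2), so Laplace
    noise with scale 2/eps1 per bin gives eps1-DP. Clipping to
    nonnegative values and renormalizing are DP-safe post-processing.
    """
    rng = np.random.default_rng() if rng is None else rng
    counts = np.array([sum(1 for x in labels if x == v)
                       for v in label_values], dtype=float)
    noisy = counts + rng.laplace(scale=2.0 / eps1, size=len(label_values))
    noisy = np.clip(noisy, 0.0, None)
    total = noisy.sum()
    if total == 0.0:  # all mass clipped away: fall back to uniform
        return np.full(len(label_values), 1.0 / len(label_values))
    return noisy / total
```

Note that the noise scale does not grow with the number of bins, which is the point made in the rebuttal above.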
Summary: The paper studies regression problems under label differential privacy (DP). It proposes a novel randomizer that generates high-quality DP labels which can be used to train a regressor. The proposed randomized mechanism is sound, and the experiments on various benchmark datasets and privacy budgets demonstrate improvements over the baselines. Overall I think the contribution of the paper is significant, but the proposed mechanism seems to be limited to the case where the labels are discrete (not continuous). Based on that I recommend a weak accept for this paper. Strengths: 1. The contribution is significant and the idea is novel. The paper proposes a novel randomized mechanism that can generate unbiased DP labels, which helps to train an unbiased regressor. 2. The paper conducts extensive experiments on different benchmark datasets and different values of epsilon, and shows a noticeable reduction of mean squared error when using the proposed mechanism over the previous works. Weaknesses: 1. The paper focuses on optimal mechanisms for epsilon-DP, and this topic has been studied in previous work, such as the staircase mechanism. It would be great if there were any discussion about extensions to (epsilon, delta)-DP. 2. The settings are limited to the discrete (but not continuous) choice of labels, which only applies to ordinal regression. 3. The authors did not attach the code, so it might be hard to reproduce the experiments. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Can the authors explain more why the proposed mechanism is better than, for example, the optimal staircase mechanism if we look at Figure 4.a, especially when epsilon is large? I have used the staircase mechanism before and it turned out to be better than many baseline mechanisms when epsilon is large enough. Confidence: 3: You are fairly confident in your assessment.
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The paper discusses clearly the limitations of their proposed mechanism compared to the baselines in the Conclusion section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for thoughtful comments and questions. We include below the responses to the questions. > Comparison of “optimal unbiased randomizers” to the staircase mechanism. Indeed, the staircase mechanism was introduced as the optimal noise mechanism minimizing the _worst-case error_, namely the error for any true value; moreover, the domain of the true value there is unbounded, namely all of $\mathbb{R}$. Our optimization problem is different in that we are optimizing the _average_ squared loss between the noisy and true labels, and we assume that our domain is bounded. Note that the staircase mechanism is usually applied in the central model of DP, where the noise is added to an aggregate value (which can be unbounded, but has bounded sensitivity), whereas, in our setting, we are applying the staircase mechanism in the local model of DP. > It would be great if there is any discussion about extension to (epsilon, delta)-DP. For our specific approach, it does not seem that relaxing to $(\varepsilon, \delta)$-DP will be beneficial. For the first stage of estimating the histogram privately, one could potentially use an approximate-DP mechanism, but we believe that is not likely to change the prior significantly. For the second stage of randomizing labels, it is known that approximate-DP may not be helpful in the local model [1]. Thank you for raising this question; we will add a discussion about this in the Conclusions section. > The settings are limited to the discrete (but not continuous) choice of labels which only applies for ordinal regression. We can apply our techniques even to the continuous case by discretizing the domain (such as in the experiments with the Criteo Sponsored Search Conversion Log Dataset, where the `SalesAmountInEuro` field was an arbitrary floating point number).
There is indeed some utility loss due to the discretization, but it should not be significant, as it can be upper bounded in terms of the Lipschitz constant of the loss and the width of the discretization interval. Moreover, the rounding can be done in a randomized manner that preserves the expectation (e.g., 0.3 rounds to 1 w.p. 0.3 and to 0 w.p. 0.7), so the final noisy label, obtained by rounding and then applying the randomizer, is unbiased. (Please see the Lemmas in the response to Reviewer YHYY for a similar argument about the discretization of the output labels.) **References**\ [1] Mark Bun, Jelani Nelson, and Uri Stemmer. Heavy Hitters and the Structure of Local Privacy, PODS 2018.
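The expectation-preserving rounding mentioned above (e.g., 0.3 rounds to 1 w.p. 0.3 and to 0 w.p. 0.7) can be sketched as follows (illustrative; the function name and grid handling are my own choices):

```python
import bisect
import random

def unbiased_round(x, grid):
    """Randomly round x onto the sorted grid so that E[output] = x.

    x must lie in [grid[0], grid[-1]]. We choose between the two grid
    points bracketing x with probabilities that preserve the mean:
    E[output] = lo * (1 - p) + hi * p = x  when  p = (x - lo) / (hi - lo).
    """
    j = bisect.bisect_right(grid, x) - 1
    if j >= len(grid) - 1:  # x coincides with the right endpoint
        return grid[-1]
    lo, hi = grid[j], grid[j + 1]
    p_hi = (x - lo) / (hi - lo)
    return hi if random.random() < p_hi else lo
```

Because the rounding is unbiased, composing it with an unbiased randomizer keeps the end-to-end noisy label unbiased, as the rebuttal argues.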
Summary: This paper proposes a new family of DP label randomizers for regression models. They show that these randomizers improve the MSE of the training set at the expense of the noisy label loss, indicating a bias-variance tradeoff alternative to that of other similar works. Strengths: The strength of this paper is the theoretical and empirical evidence for the randomizer. It is evident that it outperforms the biased randomizers by significant margins in the test error, but this comes at the cost of the noisy label loss. Weaknesses: However, it would have been helpful to include what this tradeoff means practically and its implications. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Overall, I think the paper is well-written and has enough experiments to validate its conjectures. The novelty of the paper is in its theoretical results and improved guarantees. Some points/comments that could be improved:

* $\hat{y}$, the noisy label, is never defined in Section 3.
* The end of Section 3, particularly the observations for $\epsilon$, is a bit unwieldy to read -- this would probably be better presented with a table or figure.
* Why is the computational complexity not compared between the unbiased and biased randomizers? This would help clarify what the trade-offs of this method are.
* It would be nice to see an analysis of why the method works better on one dataset versus another.
* In Section 4.3, it is mentioned that for smaller $\epsilon$'s one can see the standard error increase -- is this only true for these types of algorithms or for simpler models as well?

Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive feedback and useful suggestions. We will add the definition of $\hat{y}$ and incorporate other suggested changes in a future revision. > Trade-off between noisy label loss and test loss Our main goal in the paper is to minimize the final test loss. The noisy label loss is only an intermediate metric of interest. Note that the noisy label loss decomposes into bias and variance. The main observation of this paper is that _reducing the bias, even at the cost of blowing up the variance (by orders of magnitude), is beneficial for the end goal of minimizing the test loss_. This seems to be consistent with our experiments, where for mechanisms without bias (the unclipped Laplace and staircase mechanisms, and our optimal unbiased randomizer), we find that reducing the variance corresponds to reducing the test loss as well. > Computational complexity (Same response as to Reviewer oAvy) While both the optimal biased and optimal unbiased randomizers can each be written as a solution to an LP, the solution to the former has a special structure that admits a dynamic programming algorithm (RR-on-Bins from prior work). However, no similar property seems to hold for the solution to the latter; hence explicitly solving the LP is currently the only available option. Fortunately, even though solving the LP can be expensive, it is a “one-time” computation in our setting, and the training time typically dominates the time taken to solve the LP by a large margin. In addition, the LP also has a “knob”, namely the mesh size, that can be used to trade off the LP computation time against the utility of the unbiased algorithm. The following table shows the running time of the LP for the unbiased randomizer, the noisy label loss, and the final test loss, for different mesh sizes (parameter $n$ in Algorithm 2) for $\varepsilon = 1$ on the US Census dataset we study (prediction of number of weeks worked in $\{1, \ldots, 52\}$).
We note the following:

* Even though the wall-clock time for computing the optimal unbiased randomizer is significantly larger than that of the optimal biased one, it is still orders of magnitude smaller than the ML model training time.
* The test loss is quite similar for various mesh sizes, even though the noisy label loss does improve slightly with finer discretization. This suggests that the unbiasedness was key to the improvements over RR-on-Bins and that the discretization of the output set does not affect performance as much.

| **Mechanism** | **Mesh size** | **Computing mechanism wall-clock time** | **Noisy label loss** | **Test loss** |
|---|---|---|---|---|
| RR-on-Bins | n/a | 0.154 s | 79.71 | 172.44 |
| Optimal Unbiased Randomizer | 52 | 2.38 s | 1288.21 | 134.44 |
| Optimal Unbiased Randomizer | 416 | 17.1 s | 1275.22 | 134.43 |
| Optimal Unbiased Randomizer | 1664 | 161 s | 1274.71 | 134.43 |

> In Section 4.3, it is mentioned that for smaller $\varepsilon$'s one can see the standard error increase -- is this only true for these types of algorithms or simpler models as well? We conjecture that this might be data dependent (e.g., specific to the AppAds dataset). Note that the increase in the standard error bars is evident for other unbiased mechanisms, such as the Laplace and staircase mechanisms, as well.
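For reference, the decomposition appealed to above is the standard bias-variance identity for squared loss; with noisy label $\hat{y}$ and true label $y$:

```latex
\mathbb{E}\big[(\hat{y} - y)^2\big]
  = \underbrace{\big(\mathbb{E}[\hat{y}] - y\big)^2}_{\text{bias}^2}
  + \underbrace{\operatorname{Var}(\hat{y})}_{\text{variance}}.
```

For an unbiased randomizer the first term vanishes, so its noisy label loss is pure variance; the rebuttal's point is that eliminating the bias term helps the downstream test loss even when the variance term grows by orders of magnitude.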
NeurIPS_2023_submissions_huggingface
2023
Spectral Entry-wise Matrix Estimation for Low-Rank Reinforcement Learning
Accept (poster)
Summary: This paper presents a detailed analysis of matrix estimation problems in low-rank bandit and low-rank RL scenarios. The authors investigate the effectiveness of spectral-based matrix estimation methods and demonstrate their ability to accurately recover the singular subspaces of the matrix with minimal entry-wise error. Building upon these findings, the paper introduces a regret minimization algorithm for low-rank bandit problems and a best policy identification algorithm for reward-free RL in low-rank MDPs. Strengths: 1. The paper tackles the challenging task of deriving the entry-wise error of low-rank matrix estimation, specifically focusing on the correlated noise case. This analysis extends existing results and enables the study of bandit and RL settings. The techniques employed, such as leave-one-out arguments and Poisson approximation, demonstrate technical prowess. 2. Despite being a theoretical paper, the writing is clear and easy to comprehend, enhancing its accessibility. Weaknesses: 1. The paper fails to address the necessity of deriving entry-wise error in low-rank matrix estimation for bandit and RL settings. In the context of low-rank bandit problems, the primary objective is to identify the best entry, such as the largest value, in the matrix as quickly as possible. Consequently, the emphasis should be on selecting the arm with the highest reward, rather than achieving a small entry-wise error for the entire matrix. Entries with very small values do not necessitate extensive exploration. Thus, caring about entry-wise error may lead to unnecessary over-exploration, which is not desirable in low-rank bandit or low-rank MDP settings. 2. In low-rank bandit and MDP scenarios, the focus should primarily be on minimizing regret. Several existing approaches have achieved minimax regret bounds in these settings. For instance, Kang et al. (2022) addressed a more general low-rank matrix bandit problem and proved a minimax regret of $O((n+m)\sqrt{T})$. 
Jain and Pal (2023) also considered a similar low-rank matrix bandit problem and used a similar entry-wise matrix estimation result to achieve a regret bound of $\mathrm{polylog}(n+m)\sqrt{T}$ for the rank-1 case. Unfortunately, the paper only discusses sub-optimal approaches in the main body, while the superior results of these references are mentioned in the last section of the supplementary materials. 3. The lack of numerical experiments to evaluate the proposed regret minimization algorithm and compare it against benchmark methods in the literature is a notable drawback. Merely comparing regret upper bounds does not provide a comprehensive understanding of the algorithms' actual performance. It would be more convincing to include extensive numerical experiments that directly compare the regret achieved by different algorithms, thereby justifying the usefulness of deriving entry-wise error of low-rank matrix estimation in bandit and RL settings. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Could you elaborate on how deriving the entry-wise error of low-rank matrix estimation is justified in bandit and RL settings, providing detailed explanations and extensive numerical experiments? 2. In the discussion of related work, it is crucial to compare the paper's results with state-of-the-art approaches and provide a fair and objective analysis. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: No numerical experiments were provided. Flag For Ethics Review: ['Ethics review needed: Inadequate Data and Algorithm Evaluation'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable feedback and insightful review! Please find below our responses. *Answer to Weakness 1 and Question 1* **A. Necessity of entry-wise guarantees in low-rank bandits and RL.** Thank you for bringing this point to our attention. Indeed, we did not explicitly motivate the need for entry-wise matrix estimation guarantees in low-rank bandits and RL. We will revise our manuscript accordingly. Below, we argue why entry-wise guarantees in low-rank bandits and RL are useful. **A.1.** The need for entry-wise matrix estimation guarantees arises naturally in our analysis. In low-rank bandits, in order to obtain logarithmic regret bounds, we need control of the estimated gaps. This in turn requires entry-wise guarantees (see Appendix G.1 - Line 1090). In reward-free RL, in order to obtain PAC bounds, we need to control the value difference error. This in turn requires control of matrix estimation errors in the norm $\Vert \cdot \Vert_{1 \to \infty}$ (see Lemma 12, Appendix A.3). Controlling this error is akin to controlling the entry-wise error in terms of analysis (see Appendix F). **A.2.** Existing work on low-rank bandits showcases regret analyses that do not use entry-wise guarantees. However, these guarantees are only minimax, while ours are gap-dependent, exhibit logarithmic scaling in the time horizon, and even enjoy better minimax guarantees in some scenarios. Indeed, the work of Jun et al. (2019) (cited as [29] in our supplementary material) proposes a clever algorithm with a minimax regret guarantee scaling as $\tilde{O}((m+n)^{3/2} \sqrt{T})$. Their regret decomposition (Corollary 1 in [29]) suggests that both entry-wise guarantees and guarantees in the Frobenius norm yield the same results. The follow-up work of Kang et al. (2022) (cited as [32]) proposes an algorithm leveraging a novel estimator and attaining an improved minimax regret guarantee of order $\tilde{O}((m+n)\sqrt{T})$.
In contrast, our work complements and, in some aspects, improves upon these results: **(i)** our analysis allows us to obtain gap-dependent logarithmic regret bounds of order $\tilde{O}((m+n)(\bar{\Delta}/\Delta_{\min}^2) \log^3(T))$. Neither [29] nor [32] provides gap-dependent guarantees with logarithmic regret; **(ii)** under the assumption that $\Delta_{\max}/\Delta_{\min} \le \xi$, our algorithm, SME-AE, achieves a minimax regret bound scaling as $\xi \sqrt{(m+n)T}\log(T)^2$. This is a better minimax guarantee than the ones presented in [29] and [32]; **(iii)** we also perform best-entry identification with provable guarantees (see Theorem 7). This problem remains largely under-explored. **A.3.** Existing work has also noted the importance of entry-wise guarantees, but with limited success. Please refer to the works of Bayati et al. (2022), Jain and Pal (2023), and Shah et al. (2020) (cited as [7], [25], and [48], respectively, and discussed in Section 6 and Appendix H.2). [7] is the closest to our setting. There, they use a row enhancement procedure to obtain a matrix estimation guarantee in the norm $\Vert \cdot \Vert_{2\to \infty}$ (see Proposition 1 in [7]). They implicitly assume that one of the dimensions, say $m$ for example, is $\Theta(1)$ (see Assumption 1 and the discussion before it in [7]), thus making their guarantee equivalent to an entry-wise guarantee. Hence, their regret bounds (e.g., see Theorem 1 in [7]) may suffer an extra dependence on $\sqrt{m}$ while ours do not. Please note that we do not claim that entry-wise guarantees are absolutely needed, but we do not see how to obtain our guarantees without them. **B. Exploration in low-rank bandits and RL and entry-wise guarantees.** The reviewer raises an exciting question, which is that of optimal exploration under low-rank structure. Unfortunately, this question is largely still open.
Note however that our approach based on entry-wise guarantees already allows us to considerably trim the exploration process. Indeed, our algorithm, SME-AE, achieves gap-dependent regret bounds of order $\widetilde{O}((m+n) (\bar{\Delta}/\Delta_{\min}) \log^3(T))$, thus showing that we do not scale with the number of arms $nm$. Also note that our algorithm, SME-AE, does not perform matrix estimation up to an arbitrarily small entry-wise error but only up to the minimal gap $\Delta_{\min}$, which is necessary to obtain logarithmic regret. *Answer to Weakness 2 and Question 2.* **C. PAC guarantees in low-rank bandits and RL.** The reviewer commented "In low-rank bandit and MDP scenarios, the focus should primarily be on minimizing regret." We believe that the problem of pure exploration in low-rank bandits and RL (a.k.a. PAC-RL) is also important. In low-rank bandits, we have in fact also provided regret guarantees in addition to guarantees on best-entry identification. We believe this is a strength of our work rather than a weakness. **D. On related work.** First, we wish to clarify that we ended up putting the work of Kang et al. (2022) (cited as [32]) and Jain and Pal (2023) (cited as [25]) in Appendix H simply because we didn't have enough space in the main body. Moreover, we clearly stated in the main body (see line 318) that further details are provided on the related work (including [25] and [32]) in Appendix H. However, we understand the reviewer's sentiment that all relevant work should be mentioned in the main body, especially those with state-of-the-art results. We thank the reviewer for bringing this to our attention and we will revise our manuscript accordingly. *Answer to Weakness 3.* **E. Experimental results.** We think that our theoretical contributions are numerous (matrix estimation under correlated noise with entry-wise guarantees, plus algorithms with better performance guarantees in low-rank bandits and RL).
But we agree that adding numerical experiments would nicely complement our theoretical findings. We intend to include experiments when revising our paper. --- Rebuttal Comment 1.1: Comment: Thanks for the detailed clarification. I have increased the score accordingly.
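As an editorial illustration of the adaptive stopping rule that distinguishes SME-AE from naive explore-then-commit (discussed in this thread), here is a generic successive-elimination skeleton in Python. This is not the paper's algorithm: the spectral matrix estimator and its entry-wise confidence radius are replaced by per-arm empirical means and a geometrically shrinking placeholder radius.

```python
def successive_elimination(sample, arms, max_epochs=20):
    """Generic successive-elimination skeleton with an adaptive stopping rule.

    `sample(arm)` returns a (possibly noisy) reward for `arm`. The paper's
    spectral matrix estimator and entry-wise confidence radius are replaced
    by per-arm empirical means and a geometrically shrinking placeholder
    radius, so this is only a structural sketch, not SME-AE itself.
    """
    sums = {a: 0.0 for a in arms}
    counts = {a: 0 for a in arms}
    candidates = list(arms)
    for epoch in range(1, max_epochs + 1):
        if len(candidates) == 1:
            break  # adaptive stop: a single candidate survives -- commit to it
        for a in candidates:            # refine estimates of surviving arms
            for _ in range(2 ** epoch):
                sums[a] += sample(a)
                counts[a] += 1
        means = {a: sums[a] / counts[a] for a in candidates}
        best = max(means.values())
        radius = 2.0 ** (-epoch)        # placeholder confidence radius
        # Eliminate arms whose estimate is provably below the leader's.
        candidates = [a for a in candidates if means[a] >= best - 2 * radius]
    return candidates[0]
```

The key point is the data-dependent exit: exploration ends as soon as one candidate remains, rather than after a fixed budget, which is what yields gap-dependent rather than worst-case exploration length.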
Summary: The authors investigate matrix estimation in reinforcement learning and bandit settings with low-rank structure. They demonstrate the effectiveness of spectral-based methods in recovering matrix subspaces and minimizing entry-wise error. This enables the development of efficient RL algorithms tailored for low-rank bandits and Markov Decision Processes (MDPs). This paper is solid, supported by robust results and technically intriguing proofs. It builds upon the recent trend of examining entry-wise error bounds for matrix completion problems and extends it to dynamic settings like RL and bandits. While the guarantees may not be surprising and the algorithms largely follow an explore-and-commit approach, the authors introduce an interesting technique using Compound Poisson noise to transform a non-i.i.d. problem into an i.i.d. problem, which may have larger implications for other applications. Strengths: - Theoretical soundness - The Compound Poisson noise reduction for MDPs is interesting - Exploring the potential of entry-wise guarantees of matrix completion in a dynamic setting is in general interesting Weaknesses: While acknowledging the efforts invested in developing the theory, the novelty of the research appears limited given the well-established techniques of entry-wise recovery and leave-one-out. Moreover, the current algorithms are constrained to explore-and-commit and (pseudo) i.i.d. sampling schemes, as opposed to adaptive algorithms such as UCB, due to the limitations of the theoretical guarantees. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: How is the result for matrix completion with Poisson noise in this paper related to "Near-optimal entrywise anomaly detection for low-rank matrices with sub-exponential noise." In International Conference on Machine Learning, pp. 3154-3163. PMLR, 2021? Confidence: 4: You are confident in your assessment, but not absolutely certain.
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: I do not perceive any immediate potential negative social impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful review and constructive feedback! We address your comments below. *Response to Weaknesses* **A. About the novelty of our analysis and of our algorithms.** **A.1.** As far as we are aware, the leave-one-out argument has so far been limited to a matrix-plus-noise model with independent noise entries. We feel that extending it to dependent noise entries was non-trivial. Notably, we establish entry-wise matrix estimation guarantees under Markovian noise structures (see our results for Model II in Section 3.3 and Appendix B.3). **A.2.** Note that our algorithm for the low-rank bandit problem is adaptive because it uses an adaptive stopping rule to end the exploration phase (this contrasts with naive Explore-Then-Commit (ETC) algorithms). We actually explain at the beginning of Section 4 that a naive ETC algorithm would lead to poor regret guarantees. Our algorithm remains simple, yet it achieves the best regret guarantees obtained so far. *Response to Questions* **B. Comparison with Farias et al., "Near-optimal entrywise anomaly detection for low-rank matrices with sub-exponential noise", ICML 2021.** Thank you for pointing us to this relevant paper, referred to as [0] below. We will cite it and discuss its results in the revised version of our manuscript. [0] provides entry-wise matrix estimation guarantees for random matrices with entries that are independent (see Theorem 2 in [0]). In contrast, we provide entry-wise matrix estimation guarantees for matrices with dependent entries (see our Theorems 1, 2, and 3). Both [0] and our submission rely on the leave-one-out argument for random matrices with sub-exponential entries. In our case, we use this argument as a sub-step of our analysis after performing the Poisson approximation. However, we believe that, as explained below, our final results are richer, more precise, and actually needed for our RL applications.
**B.1.** We provide guarantees on subspace recovery in $\Vert \cdot \Vert_{2\to \infty}$ (see Theorems 1, 2, and 3) and guarantees in $\Vert \cdot \Vert_{1\to \infty}$ (see Theorems 2 and 3). Guarantees of this type are not provided in [0]. The subspace recovery guarantees are useful as a sub-step of our analysis, and we believe they are of independent interest. The guarantees in $\Vert \cdot \Vert_{1\to \infty}$ are useful in reward-free RL. **B.2.** The entry-wise guarantees of [0] are only expressed in terms of the matrix dimensions $n$ and $m$, while our guarantees exhibit a dependence on the number of observations $T$, the dimensions $n$ and $m$, and the confidence level $\delta$. Having guarantees with an explicit dependence for all $T \ge 1$ and $\delta \in (0,1)$ is crucial in the design of our algorithm for low-rank bandits. **B.3.** At a more technical level, we cannot apply the proof of Theorem 2 from [0] to our setting unless the number of samples $T = \Omega(n\sqrt{n})$ in the homogeneous case, a far larger number compared to $T=\Omega( n\log n)$ in our setting. The authors of [0] require this condition in order to apply a well-known leave-one-out theorem (Lemma 6 in [0]), which requires $32 \kappa \max(\gamma,\phi(\gamma))\leq 1$, where $\gamma$ and $\phi(\cdot)$ are quantities defined in the proof of Proposition 2 in [0]. Applied to our setting, this condition can be rewritten as $\sqrt{n \log (n)}(T\Vert M \Vert_{\infty} + 1)\kappa^2 \leq C T\Vert M \Vert_2$ (see the second assumption in Proposition 2 in [0]). In particular, for the homogeneous case, this condition requires $T = \Omega(n\sqrt{n})$ as mentioned above. Moreover, our proof uses tighter results (such as an application of Bennett's inequality in our Lemma 29), which enables us to achieve very tight bounds (even at logarithmic scale). --- Rebuttal Comment 1.1: Comment: Thank you for clarifying my questions!
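As an editorial aside on the Poisson approximation discussed in this thread: the idea (in our reading) is that when $T$ samples are drawn uniformly over the $n\times m$ entries, the per-entry observation counts are multinomial and hence dependent, whereas replacing them with independent Poisson$(T/nm)$ counts gives a close, fully independent surrogate. A minimal numerical illustration, with all names chosen here for exposition:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, T = 20, 15, 10_000
p = T / (n * m)  # expected number of observations per entry

# Exact sampling: drawing T entries uniformly at random yields multinomial
# counts N with N.sum() == T, so the counts are (weakly) dependent.
N = rng.multinomial(T, np.full(n * m, 1.0 / (n * m))).reshape(n, m)

# Poissonization: replace N by independent Poisson(T / (nm)) counts. The
# total sample count becomes random, but the per-entry counts are now
# independent, which is what lets leave-one-out style arguments go through.
N_pois = rng.poisson(p, size=(n, m))

print(N.sum(), N_pois.mean(), p)  # total is exactly T; means are close to p
```

The cost of the swap is a small, controllable approximation error between the two sampling models, which is the error term the rebuttal mentions bounding separately.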
Summary: This paper provides a theoretical study of low-rank matrix estimation in three contexts: **Context 1**: the case of standard matrix estimation with uniformly sampled entries, where the authors prove a sample complexity comparable to approximate recovery results of $\tilde{O}(m+n)$ (when the rank is $O(1)$), with the following two main differences compared to existing methods: **(a)** (makes the result easier): it is assumed the rank is known, and the function class restriction is based directly on an explicit low-rank assumption; **(b)** (makes the result much harder and stronger): the error is measured with the maximum norm rather than the Frobenius norm or expected loss. Because of the general context of the paper, which concerns mostly reinforcement learning (see other results below), these results are formulated as a reinforcement learning/bandit problem where the arms are arranged as pairs of integers and the matrix of rewards is assumed to be low-rank. However, note that independence of the noise components is required (cf. lines 689-693 in the supplementary). **Context 2.1**: In this model, recovery guarantees are proved for estimating a low-rank transition matrix from observations of independent transitions. Here, the matrix must be square, with the dimension being equal to the number of states. **Context 2.2**: In this more natural setting, we estimate the transition matrix from Context 2.1 based only on full trajectories. Regret bounds proved in this case rely on the ergodicity of the Markov chain by splitting the expression of the average error over time into sums where the time steps are separated by a large number $\tau$, so that the distributions of the corresponding observations are approximately independent and distributed as the limiting distribution. Finally, the authors introduce a novel algorithm for performing reinforcement learning in a low-rank bandit setting.
The algorithm works in epochs, with each epoch providing a more and more refined estimate of a set of candidate arms. While the set of candidate arms has more than 2 elements, the algorithm recommends a uniformly random arm; after the set of candidate arms degenerates to a single element, the algorithm keeps recommending that one. A bound is shown on the expected amount of time it takes to identify the best arm. This culminates in a gap-independent regret bound of $\tilde{O}(T^{1/2})$ for this case, which, to the best of my knowledge, solves a highly significant open problem in the nascent theory of low-rank bandits. The proofs rely on an ingenious combination of existing ideas, with the most salient one being that of Poisson approximation: the matrix of observed rewards is approximated by a matrix with entries equal to $M_{i,j}T_i$, where the $T_i$s are independent Poisson random variables (this general technique has previously been introduced in [7]). Then, concentration inequalities for such matrices are used in combination with bounds on the error in approximating the true matrix of observed rewards by this Poisson approximation. ==========Post-rebuttal======= My doubts have been adequately addressed in the rebuttal and I will keep my score of **strong accept**. I remain convinced this is an excellent paper. However, I do hope the authors will take the trouble to add more detailed derivations and also tone down the statements and claims of novelty over the lack of independence of the $E_{i,j}$. Indeed, since this is an impressive contribution that many readers will be interested in studying, it would be a great service to the community to make the paper as approachable as possible. ======== **References** [1] Nathan Srebro, "Rank, Trace-Norm and Max-Norm", COLT 2005. [2] Ohad Shamir, Shai Shalev-Shwartz. Matrix Completion with the Trace Norm: Learning, Bounding, and Transducing. JMLR 2014.
[3] Yuxin Chen, Yuejie Chi, Jianqing Fan, Cong Ma, Yuling Yan. Noisy Matrix Completion: Understanding Statistical Guarantees for Convex Relaxation via Nonconvex Optimization. SIAM Journal on Optimization, 2020. [4] Prateek Jain, Soumyabrata Pal. Online Low Rank Matrix Completion. ICLR 2023. [5] Sahand Negahban, Martin J. Wainwright. Restricted Strong Convexity and Weighted Matrix Completion: Optimal Bounds with Noise. JMLR 2012. [6] Mitzenmacher, Michael and Upfal, Eli. Probability and Computing: Randomization and Probabilistic Techniques in Algorithms and Data Analysis. Cambridge University Press 2017. [7] Yuxin Chen, Yuejie Chi, Jianqing Fan, Cong Ma. Spectral Methods for Data Science: A Statistical Perspective. Strengths: **1** This is an excellent contribution to the field: although I am not completely sure about the originality (especially that of Theorem 1 as compared to other results in [5]) since I am not fully up to speed on the reinforcement learning theory, I think there is a good chance this is groundbreaking work. The algorithm of successive arm elimination is simple and elegant, with a nearly optimal regret bound. Whilst it is true that the explicit rank restriction (as opposed to, for instance, nuclear norm regularization) makes the matrix completion results seem a little trivial, the regret bound in the RL setting seems to be new and is a natural starting point to kick-start this branch of research. **2** Aside from very minor things (mostly related to the contextualization of the work within the related works), this paper is excellently written. The proofs are very well organized, to an extent which is demonstrably superior to the majority of accepted papers I have reviewed.
The authors clearly organize the proof into steps (Poisson approximation, dilation trick, general concentration inequalities for Poisson matrices, specific calculations for the Poisson approximation to this problem, bounds on the error introduced by the approximation). Each part of the proof contains references to existing literature where relevant. Weaknesses: **1** One thing that threw me off a bit was the lack of contextualization and explanation of the first results (Theorem 1). In particular, on lines 90 to 92 (see also lines 125 to 130), it seems like the authors want to distance their results from standard matrix completion results by hinting that the noise components are not independent (they are not stated to be independent at that point), when in fact it also seems this setting is exactly matrix completion with an explicit rank restriction. Indeed, lines 689-693 in the supplementary appear to suggest independence **is** required. It would be quite appropriate to compare to existing results in this case as well. A similar sample complexity for approximate recovery with a bounded loss goes all the way back to [1]. For connected problems with the trace norm, we also have the result of [2] and the extremely impressive result of [3], which is much stronger than the results presented here due to improved dependence on the variance of the noise. Similarly, I am also a bit confused by the statement on line 334 that "we do not assume the transition matrices are constructed based on a given restricted class of functions", since it appears there is a low-rank assumption in the work. **2** Similarly, I feel like the related works on low-rank bandits missed some of the recent literature [4]. **3** In the low-rank bandit result, the algorithm eventually recommends the same arm, that is, the same entry, all the time.
This is highly restrictive in a recommender systems context: the algorithm isn't even able to identify the best item for each user, only the best user-item combination... Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: **1** Could you clarify what you mean in lines 90 to 92 and lines 125 to 130 regarding the independence of the noise? Although I cannot find such an example in the literature, it seems unlikely to me that this result (Corollary 1) is new. Is there any similar result for matrix completion with the maximum absolute value of the error as the performance measure? Perhaps in [5]? **2** Could you clarify to what extent Theorem 1 and its proof differ from the existing results and proofs from [7] (what is the most similar result to Theorem 1 which was proved in [7])? **3** (I would really like to have the answer to this!) In lines 957 to 958, could you explain clearly how you arrive at the equation at the bottom of page 33?
I am a little stuck there; I need far more details than what is written, as it seems you are using another equation from the same page without stating it. Also, did you really mean $\sigma_{2r}(M)$ in the denominator or did you mean $\sigma_r(M)$ instead (which is the quantity mentioned in the textual explanation on the line above)? **4** Could you give a few more details on page 32? It feels like the reference to Lemma 4.16 in other work breaks the reader's momentum; it would be nice to write down the lemma again in the paper. Indeed, I think it is a key step in the argument that allows you to obtain this fascinating result with the max norm (Theorem 1). **5** Line 1098, second equation: do you mean $\ell+2$ instead of $\ell-2$ in the exponent? **6** In the second to last equation on line 1114 (page 42), should it not be $\lceil \log_2(1/\Delta_{min})\rceil$ or $\log_2(e/\Delta_{min})$ instead of $\log_2(1/\Delta_{min})$? Again, I see that you are not being too careful about constants in general, but there is no $O$ notation in this particular inequality. **7** In line 90, you say that the entries are sampled "uniformly at random for simplicity"; do you in fact require this assumption in the proof of the theorem? (Based on existing results for matrix completion with explicit low-rank assumptions, it would seem that this is not required.) In general, it would be nice to add a bit more details to each of the theorem statements to clarify the assumptions. **8** Is Lemma 25 known? It seems very general and standard; I feel like it is probably in some well known book. **Minor comments/typos** A citation would be nice on lines 226-227. It could be nice to reorganize sections C1 and C2.2 a little bit. For instance, in line 707, you are using a lemma which comes from a subsection that concerns the Markov problem when proving a result about Model 1; it would be nice to make it clear that Lemma 22 doesn't, in fact, require any proof elements from the context of Model 2.
In the same vein, I think it would be nice, for completeness, to write down the proof of Lemma 22, even if it is very similar to the proof of Theorem 5.7 in [6]. It is a bit funny that the proof of Lemma 21 is there but not that of Lemma 22, since Lemma 21 is pretty much trivial whilst Lemma 22 is a bit more advanced. Making the paper fully self-contained would be a nice way to get even more citations by further improving the encyclopaedic quality of the appendix. Line 1098 (page 41), second equation: I think you mean $4e$, not $2\sqrt{e}$, though I understand you are not being too bothered about constants given the final results are in $O$ notation. In the last inequality of the sequence of inequalities on page 43, and in line 1124, I think it would be nice to remind the reader of the definition of $\bar{\Delta}$ from line 239, since it is instrumental in deriving the equality on line 1124, which unfortunately relies on the fact that the recommendations are uniformly at random in $[m]\times [n]$ instead of the more natural $\mathcal{A}_{\ell}$. Typos: Line 951: missing "the". Line 706: extra square bracket inside the equation. In line 727, I recommend adding more details for how you prove that $\tau(\epsilon)\leq \tau$; it takes quite a bit of massaging of previous equations to get this with the correct constant! Line 741: "be joint probability" ==> "be the joint probability". Line 742: "for Poisson and multinomial model" ==> "for the Poisson and multinomial models". Line 793 (proof of Lemma 25): "variables" ==> "variable". Line 957: add "following" before "condition". Line 980: Model\ref{10} should probably be Model\eqref{10}. Line 1097 (page 41): you say "in order for the above guarantee to hold we require the following two conditions to hold". I think that is not technically true: the conditions below are what is needed to apply the specific previously established results, so technically those are sufficient rather than necessary conditions.
Line 1114 (page 42): I think there was an error with copy and paste: "rounds" should be after $\tau$ and before the first, not the second, comma of the line. In the first equality after line 1122, using \left and \right might be nicer. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: See "weaknesses", especially Weakness 3. In addition, it seems that the algorithm is a little unnatural in that it doesn't use the successive estimates of the set of candidate arms and merely recommends uniformly random entries amongst all arms until it is confident it has identified the best arm overall.
It would be interesting to see if it is possible to prove similar guarantees for the algorithm which recommends a random arm in the set $\mathcal{A}_{\ell}$ instead of $\mathcal{A}_1=[m]\times [n]$ at each time step $\ell$. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your careful review and very positive feedback. **A. Answer to Weakness 1 \& Question 1.** Thanks for mentioning these papers; we will cite them. We clarify below the differences between these papers and our contributions for Model I. **A.1.** [1, 2, 5] (see the refs in your review) do not provide entry-wise guarantees for the problem of matrix estimation with a low-rank constraint under any noise assumptions. **A.2.** [3] provides entry-wise guarantees for a nuclear norm penalized estimator. [7] surveys entry-wise guarantees for spectral methods. The data model considered in these references is the matrix-plus-noise model, i.e., $M + E$, where the noise matrix $E$ has independent entries. The setting we describe in Model I is different: if we write it in the form of a matrix-plus-noise model, we obtain that the noise matrix $E$ can be expressed as $$\forall (i,j)\in [n]\times [m], \quad E_{i,j} =\widetilde{M}_{i,j}-M_{i,j} = \left(\frac{nm}{T}\sum_{t=1}^T (M_{i_t, j_t} + \xi_t) 1_{\lbrace (i_t, j_t)=(i,j)\rbrace}\right)-M_{i,j},$$ where we use the expression of $\widetilde{M}$ provided in eq. (10) in Appendix C.1. From this, we clearly see that the entries of $E$ are not independent. Therefore, the entry-wise guarantees of [3, 7] do not apply in our setting. In fact, the leave-one-out argument relies heavily on the independence of the noise entries. **A.3.** In [4], the authors have a model where $m$ entries are observed per round, whereas in Model I, we observe one entry only. In addition, to obtain entry-wise guarantees, they resort to a couple of algorithmic tricks, explained in Remarks 2, 3, and 4 of [4]. In contrast, we show that simpler spectral methods enjoy entry-wise guarantees without any additional tricks. **B. Answer to Weakness 2 -- Regarding [4].** We actually cited this work in the related work (ref. [25] in our supplementary material), but only discussed it in Appendix H.2.
We will discuss it further in the main part of the paper. **C. Answer to Weakness 3.** Indeed, our algorithm eventually recommends the same (but provably optimal) arm/matrix entry. When devising SME-AE, we had in mind the applications mentioned in the introduction of Bayati et al. [6] (ref. in our supplementary material), where recommending the same arm is not a problem. This could become a problem in other types of recommender systems. We believe that our approach can be extended to these systems. **D. Answer to Question 2.** The results in [7] that are the closest to our Theorem 1 are Theorems 4.2 and 4.5 (Chapter 4). However, they are not really comparable to our results. Indeed: **D.1.** In [7], the results for entry-wise guarantees are for a matrix-plus-noise model with independent entries -- see our answer A.2. to Weakness 1 for details. In our setting, the leave-one-out argument of [7] breaks. Our proof of Theorem 1 introduces new techniques, such as the Poisson approximation, needed to deal with dependencies in the noise model. **D.2.** The leave-one-out argument in [7] is only valid when the entries are sub-Gaussian. For our purposes, the observed matrix has entries distributed as Compound Poisson random variables. Hence, we had to use different concentration inequalities than those of [7]. In particular, we needed a truncated version of the matrix Bernstein inequality (see Proposition 27 and Lemma 29 in App. D). **D.3.** Finally, the results in [7] only focus on how the guarantees scale with the matrix dimensions $n$ and $m$. In addition, we care about how they scale with the confidence level $\delta \in (0,1)$ and the number of observations $T$. Tracking these dependencies is not trivial. **E. Answer to Question 3.** Thank you for bringing this to our attention. We will detail this part in our manuscript. In the meantime, we clarify below the key steps leading to the equations on lines 957 to 958. First, we wish to disclose a typo in the definition of $\widetilde{S}^{(\ell)}$.
The correct definition is: $$\forall i,j \in [n+m], \qquad \widetilde{S}^{(\ell)}_{i,j} = \begin{cases} \widetilde{S}_{i,j} & \text{if } i \neq \ell \text{ or } j \neq \ell \\ S_{i,j} & \text{otherwise.}\end{cases}$$ We can first establish $$\Vert\widehat{Q}W_{\widehat{Q}}-\widehat{Q}^{(\ell)} W_{\widehat{Q}^{(\ell)}} \Vert_F\le\Vert\widehat{Q}\widehat{Q}^\top - (\widehat{Q}^{(\ell)})(\widehat{Q}^{(\ell)})^\top\Vert_F\le\frac{4\Vert(\widetilde{S}-\widetilde{S}^{(\ell)})\widehat{Q}^{(\ell)}\Vert_F}{\sigma_{2r}(S)},$$ where the first inequality follows by definition of $W$ and the second follows from the Davis-Kahan inequality. Next, we have $$\Vert(\widetilde{S}-\widetilde{S}^{(\ell)})\widehat{Q}^{(\ell)}\Vert_F\le\Vert E_{\ell,:}\widehat{Q}^{(\ell)}\Vert_2+2\Vert E\Vert\Vert \widehat{Q} W_{\widehat{Q}}\Vert_{2\to\infty}+2\Vert E\Vert\Vert\widehat{Q}W_{\widehat{Q}}-\widehat{Q}^{(\ell)}W_{\widehat{Q}^{(\ell)}}\Vert_{2\to\infty},$$ where the inequality follows by definition of $\widetilde{S}^{(\ell)}$, a couple of triangle inequalities, and the norm definitions. From the inequalities above and with the assumption that $\Vert E\Vert\le 16\sigma_r(M)=16\sigma_{2r}(S)$, we recover the inequality in lines 957 to 958. Moreover, yes, it should be $\sigma_r(M)$ (or $\sigma_{2r}(S)$) in the inequality. **F. Answers to Questions 4 to 8.** Thank you for the many suggestions and thorough reading of our manuscript. We will add Lemma 4.16 for completeness instead of simply citing it. Yes, in line 1098, $\ell - 2$ should instead be $\ell + 2$. In line 1114, $\log_2(1/\Delta_{\min})$ in the fourth inequality should instead be $\lceil \log_2(1/\Delta_{\min}) \rceil$, which we later bound by $\log_2(e/\Delta_{\min})$. Regarding sampling uniformly at random, we can relax this at the expense of technical clarity and presentation simplicity. We will provide more details for our theorems to further clarify their assumptions.
Regarding Lemma 25, we ended up proving it because we couldn't find a similar result in the literature. --- Rebuttal Comment 1.1: Title: Addressed. Congratulations. Please add substantially more details in the derivations in the final version. Comment: Many thanks for your many clarifications, and for agreeing to fix the typos. Regarding the lack of independence of the entries $E_{i,j}$ in your setting, I now understand what you mean, thank you. However, it certainly doesn't feel very significant, since you still have independence of the $\xi_t$ for the various $t$s. It is a classic issue in matrix completion that we need to be careful with repeated entries, as you are there, but it is usually just (highly) technical. However, I am still very convinced by the overall significance of the work and the other results. Still, it would be nice to incorporate part of this rebuttal into the paper to better explain what you mean there. Regarding A1, I understand what you mean. However, the concept of "entry-wise" isn't completely standard, and it is still worth comparing. I personally think it is nice, and doesn't diminish the value of your work at all, that the bounds scale similarly to those works, since your bounds are entry-wise (with a max over entries) and therefore much stronger. Thanks for the clarification for Question 3 as well, and for the fantastic paper and results. I am inclined to keep my score.
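To make the entry-wise (max-norm) versus Frobenius-norm distinction discussed in this exchange concrete, here is a small numerical sketch of a generic spectral estimator, the top-$r$ truncated SVD of the noisy observation. Note the simplification: the noise below is independent Gaussian, so this does not reproduce the paper's harder dependent-noise setting, and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, r = 60, 50, 2

# Ground-truth rank-r matrix.
U = rng.normal(size=(n, r)) / np.sqrt(n)
V = rng.normal(size=(m, r)) / np.sqrt(m)
M = U @ V.T

# Observations: M corrupted by small independent Gaussian noise
# (a simplification relative to the paper's dependent-noise models).
Y = M + 0.01 * rng.normal(size=(n, m))

# Generic spectral estimator: keep the top-r singular directions of Y.
u, s, vt = np.linalg.svd(Y, full_matrices=False)
M_hat = u[:, :r] @ np.diag(s[:r]) @ vt[:r, :]

# Entry-wise (max-norm) error -- the stronger guarantee discussed above --
# versus the more common Frobenius-norm error.
err_max = np.abs(M_hat - M).max()
err_fro = np.linalg.norm(M_hat - M)
```

Since the max-norm error is always dominated by the Frobenius error, a max-norm bound of the same order is strictly more informative, which is the point the reviewer makes about entry-wise guarantees.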
Summary: This paper studies two problems in RL involving low-rank matrix estimation. It provides entry-wise estimation error bounds for simple spectral methods in low-rank bandits and low-rank MDPs under different sampling mechanisms. Based on these, it provides performance guarantees for two algorithms designed for low-rank bandits and low-rank MDPs, respectively. Strengths: The paper makes a contribution toward the estimation of low-rank matrices in the settings of bandits and MDPs, respectively, as well as toward how these results can be useful in devising efficient algorithms that make full use of the low-rank structure. Specifically, the paper considers the challenging case of low-rank matrix estimation in the presence of dependent noise and provides sharp entry-wise estimation error bounds using involved techniques. This is likely to help the development of algorithms which are better fitted for low-rank settings. Weaknesses: 1. Compared with the standard matrix completion problem, Model I in this paper extends to the case of dependent noise, but it restricts the magnitude of the noise terms $\xi_t$ to be upper bounded by $c_1\Vert M\Vert_{\infty}$ (see Page 3 Line 91). Therefore, the results here do not reduce to the state-of-the-art results in matrix completion with independent noise, where the noise level can be orderwise larger than $\Vert M\Vert_{\infty}$ [1]. We suggest the authors discuss why their condition becomes more stringent when the noise is independent. 2. There are a few typos in the paper. For example: On Page 2 Line 71, it would be better to avoid using acronyms like "wlog". On Page 7 Line 224, "where in a first phase" --> "where in the first phase". On Page 5 Line 175, "is a small when". [1] Chen, Yuxin, Yuejie Chi, Jianqing Fan, and Cong Ma. "Spectral methods for data science: A statistical perspective." Foundations and Trends® in Machine Learning 14, no. 5 (2021): 566-806.
Technical Quality: 3 good Clarity: 3 good Questions for Authors: On Page 7 Line 220, it is assumed that the low-rank bandit has a homogeneous rank-$r$ reward matrix $M$, while this is not mentioned in the corresponding result, Theorem 7. This makes the setting kind of confusing. Can you provide more explanations on this? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: This paper does not have potential negative social impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable review and positive feedback! Please find below our responses. *Answer to Weakness 1.* **A. Relaxing the assumption on the noise upper bound.** We would like to thank you for highlighting this difference between our setting and those for matrices with independent noise (e.g., Chen et al. (2021), cited as [14] in our supplementary material), and we think this is an interesting question that should be addressed in the updated version of the paper. However, we would like to stress the following two points: **A.1** First, the main focus of this paper has been obtaining error bounds with near-optimal scaling with respect to the problem dimensions $n,m,T$, and addressing the problem of dependence between different entries of the observation matrix. Hence, in the analysis for Model I, we made a compromise by upper-bounding the standard deviation of the noise by $c \|\|M\|\|\_{\infty}$ for some universal constant $c > 0$ and simplifying the analysis - see Lemma 25, where we define $L = \max (\|\|M\|\|\_{\infty},\sigma)$. **A.2** We agree that state-of-the-art results for matrix recovery with independent noise have a weaker assumption on the noise magnitude, but these results are applicable only to models with independent noise. The results we presented should not be reduced to the independent setting, but rather considered as a first step towards obtaining matrix recovery guarantees for matrices with dependent entries. *Answer to Weakness 2 -- Typos.* Thanks a lot for noticing. We will correct these typos. *Answer to Questions.* **B. On the setting of Theorem 7.** The assumption of a homogeneous rank-$r$ reward matrix $M$ stays valid in the entirety of Section 4, and thus we also assume it when presenting Theorem 7. We will clarify this. Actually, we could remove this assumption, but at the expense of the clarity and simplicity of the results. 
More precisely, to remove the assumption, we just need to use the explicit expressions of the constants $c_1, C_1$ in the analysis. These constants depend on $\mu$, $\kappa$, $r$, $m$, and $n$ as described on Line 1093 in Appendix G. --- Rebuttal Comment 1.1: Comment: Thank you for the clarification! I would like to maintain my initial evaluation of the paper.
null
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Polynomial-Time Linear-Swap Regret Minimization in Imperfect-Information Sequential Games
Accept (poster)
Summary: The authors introduce a new class of correlated equilibria called linear-deviation correlated equilibria (LCEs), which can be approached efficiently if all players attain sublinear linear-swap regret. They show that LCEs are distinct from correlated equilibria and extensive-form correlated equilibria, and establish the hardness of maximizing social welfare over LCEs. Finally, they show the difference between no-linear-swap-regret dynamics and no-trigger-regret dynamics on a small game to support their theoretical results. Strengths: - Introduction of a new correlated equilibrium class. - Technical contribution of a polynomial characterization of the set of all linear transformations from a sequence-form strategy polytope to itself. - General algorithm for minimizing linear-swap regret. - Shows the relation to existing equilibrium classes. - Very clearly written paper that I expect will be built upon in the future. Weaknesses: - This is a theoretical paper and it is not clear what would be the motivation to approach LCEs in practice, compared to other correlation classes. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - I think it would be of great pedagogical value to make a visual depiction of the different classes of equilibria similar to https://www.cs.cmu.edu/~ggordon/CE/ -- but I understand this might not be possible because of the high dimensionality of the joint strategy space. - Do you think there exists a linear program to find LCEs? If not, why? Confidence: 4: You are confident in your assessment, but it is quite likely that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: Limitations are adequately addressed. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the comments and observations on our work. Below we address the questions raised and the discussed weaknesses: (Q1) Thank you, that would indeed be interesting. As you hinted, one likely key difficulty in constructing such a representation is the fact that interesting sequential games (even when extremely simple) tend to have a relatively large number of strategy profiles, which might make a low-dimensional visualization impossible. We remark that focusing on games with sequential moves is necessary in this case, as the special case of normal-form games is not interesting enough since there the EFCE, LCE and CE all coincide. However, we will try to see what we can do for the final version. (Q2) We agree that this is a very interesting question, to which we also alluded in the future work section (Section 5). In particular, we suspect that it might be possible to devise an efficient centralized algorithm making clever use of LPs in the spirit of Ellipsoid Against Hope that is able to compute one LCE. That is, an algorithm that computes some valid LCE which, however, is not guaranteed to be optimal in any sense. On the other hand, using an LP to *optimize* over the set of LCEs is impossible due to complexity barriers (Thm 4.1). (W1) Our goal in this paper is to make progress on the challenging question of what is the strongest notion of rationality that can be efficiently attained in EFGs. We believe that the linear-deviation correlated equilibrium (LCE) is worth defining because it emerges from these no-linear-swap regret dynamics, which constitute a natural class of learning dynamics. An interesting recent development by Mansour et al. [1] is that all notions of rationality weaker than ours automatically allow the environment to exploit the agent. We believe that the play of higher-rationality no-regret learning agents is intrinsically interesting. LCE is the name of the equilibrium points reached by such agents. 
We expect that future work will analyze LCE’s game-theoretic properties beyond the online learning-related properties already uncovered by us and Mansour et al. [1] Yishay Mansour, Mehryar Mohri, Jon Schneider, and Balasubramanian Sivan, 2022. Strategizing against learners in bayesian games. COLT. --- Rebuttal Comment 1.1: Comment: Thank you for further enlightening on the topic! Especially for the reference to the new recent work, that is interesting.
Summary: This paper studies the convergence of uncoupled strategies to a weakened notion of equilibria called "linear-deviation correlated equilibrium". This equilibrium is reached when all players minimise the linear-swap regret, which is a specialisation of Phi-equilibria where the set of deviations Phi is the set of all linear transformations from the set of (mixed) strategies to itself. The authors provide an efficient implementation of such no-linear-swap-regret algorithms and prove that the set of linear-deviation correlated equilibria is a subset of the extensive-form correlated equilibria. The technical tool used is that of Gordon 2008, which converts a regret minimiser for the space of deviations into a regret minimiser for the strategy set. Strengths: The paper aims to solve an important problem, which is the complexity of CE in EFG. The paper is easy to read, clear and well written, and the results seem non-trivial. Weaknesses: The notion of LCE is not well motivated. The authors prove that LCEs are EFCEs but, while EFCEs are well motivated in EFGs, LCEs do not seem to have any practical meaning. Also, the algorithm presented in the paper is slower at converging to EFCEs than the one specifically designed for them (Figure 2, left) and thus does not contribute in this regard. The main weakness is that LCEs are not an interesting equilibrium to converge to, except for their connection to EFCEs. The authors should motivate the introduction of these new equilibria, for example by finding examples in which an LCE is a reasonable solution concept while an EFCE is not. Without this, the paper seems like a technical exercise. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1) Thm 3.1 seems to be the main technical statement of the paper. What are the main challenges of proving this statement? 
A better intuition of its implications/design would be appreciated. 2) Motivate LCE (see Weaknesses). 3) Figure 2 only proves that the inclusions of CE, LCE and EFCE are strict, but this is somewhat uninteresting once we have a theorem stating it. What happens in Figure 2 (left) if you put time on the x-axis instead of iterations? If the no-linear-swap-regret algorithm runs faster than the one that minimises only the trigger regret, this would be a better algorithm for finding EFCEs. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the comments and observations on our work. Below we address the 3 questions raised: (Q1) Thanks for the feedback. As mentioned to Reviewer vZdR, we will use the extra content page to revise our paper and include a more detailed intuition of its main proof ideas. We include a high-level, intuitive description of the theorem and the challenges arising in its proof in our response to Reviewer vZdR. (Q2) We disagree that LCE is interesting only in relation to EFCE. No-trigger-regret agents are robust (hindsight rational) only to trigger deviations, which are a measure-zero set of linear transformations. The question as to what is the highest notion of regret that can be minimized efficiently in imperfect-information sequential games is natural and established. An interesting recent development by Mansour et al. [1] is that all notions of rationality weaker than ours automatically allow the environment to exploit the agent. We believe that the play of higher-rationality no-regret learning agents is intrinsically interesting. LCE is the name of the equilibrium points reached by such agents. We expect that future work will analyze LCE’s game-theoretic properties beyond the online learning-related properties already uncovered by us and Mansour et al. [1] Yishay Mansour, Mehryar Mohri, Jon Schneider, and Balasubramanian Sivan, 2022. Strategizing against learners in bayesian games. COLT. (Q3) One thing we wanted to demonstrate with Figure 2 is that in practice there is indeed a perceptible difference between the regret achieved by no-swap and no-trigger regret dynamics, even in fairly small and simple games. Our no-linear-swap-regret algorithm can be used to minimize any kind of regret that linear-swap deviations subsume (such as the trigger regret). 
However, the per-iteration time complexity of our linear-swap regret minimization algorithm is worse than that of the trigger-regret minimization algorithm, due to the projection step and the explicit computation of the fixed point (steps (5) and (6) of Algorithm 1). While the algorithm that we provide in the paper (based on the ellipsoid method) is enough for showing that minimizing linear-swap regret can be done in polynomial time, we suspect that a better understanding of the geometry of the linear deviations polytope could enable sidestepping the expensive ellipsoid method used in our projection step and lead to provably and practically better running times. We leave this interesting question for future work. --- Rebuttal Comment 1.1: Comment: I thank the authors for taking the time to answer my questions. I now better understand the questions that the paper tries to answer. However, I'm confused about the arguments regarding [1]. Are the authors claiming that their algorithm can minimise linear-swap regret also in the Bayesian games analysed in [1]? Moreover, I still believe that while a general analysis of LCEs can be deferred to future work, a brief analysis of their features would greatly improve the paper. In particular, a toy example in which EFCEs and LCEs lead to substantially different behaviour would be enough. I now increase my score. Depending on the authors' answers, I'll be willing to increase my evaluation further. [1] Yishay Mansour, Mehryar Mohri, Jon Schneider, and Balasubramanian Sivan, 2022. Strategizing against learners in bayesian games. COLT. --- Reply to Comment 1.1.1: Comment: Thank you a lot for engaging in this discussion and for providing continuous feedback on our paper. Indeed, our algorithm can minimize linear-swap regret in Bayesian games with a finite set of player types $\Theta$. This follows from the fact that extensive-form games are general enough to capture Bayesian games as well. 
In particular, we can represent any Bayesian game by using a game tree whose root is a chance node, which randomly initializes the player type to be one of the available $|\Theta|$ types. Information sets are set up so that each player can distinguish only their type and not the other players’. We have also included a short discussion of this in lines 118-123 in the Introduction of our submitted paper pdf. Regarding the brief analysis of the features of the LCE, we believe that Example E.1 from the appendix, used to prove that LCE $\neq$ EFCE, is an interesting toy example showcasing the difference in behaviors that can be exhibited by these equilibria. In this case it happens that the linear deviations capture all possible deviations and, thus, LCE = CE. The game given in the example is a classic instance of a signaling game that has been extensively discussed in the past (e.g., see von Stengel and Forges [2008] for an extensive discussion of the different behaviors that are possible for the EFCE and CE, which in our case is equal to the LCE). We will incorporate this discussion in the final version. [1] B. von Stengel and F. Forges. Extensive-form correlated equilibrium: Definition and computational complexity. Mathematics of Operations Research, 2008.
Summary: The paper studies regret minimization in extensive-form games (EFG). Specifically, they study a notion of regret called linear-swap regret, which measures the regret against linear transformations of the player's sequence-form strategies. This notion is stronger than trigger regret (as trigger deviations can be described as a linear transformation of sequence-form strategies) but weaker than swap regret. The main technical contribution lies in Theorem 3.1, which states that the set of linear transformations (from the set of sequence-form strategies to itself) is a compact polytope defined by a polynomial (in the game tree size) number of linear constraints. With that in hand, they leverage the template of Gordon et al. [2008], which produces a $\Phi$-regret minimization algorithm given a no-external-regret minimizer over $\Phi$ and a fixed-point oracle which returns a fixed point of transformations in $\Phi$. In this context, the no-external-regret minimizer can be chosen, for example, to be Online Gradient Descent; and the fixed-point oracle can be implemented using a Linear Programming solver (e.g., the Ellipsoid algorithm). This results in an efficient algorithm that achieves $\sqrt{T}$ linear-swap regret, which in turn also implies an efficient method for finding the correlated equilibrium that corresponds to linear transformations, which the authors call linear-deviation correlated equilibrium (LCE). Finally, the authors provide examples to show that the set of LCEs strictly contain**s** the set of CEs, and is strictly contain**ed** in the set of Extensive-form correlated equilibria (EFCE). They also show that the set of Behavioral correlated equilibria (BCE) and the set of LCEs are incomparable. Strengths: The paper is written well; the authors give great background on EFGs, on different notions of correlated equilibrium, and their relation to different notions of regret. 
The construction of the algorithm and results are fairly easy to understand, and the authors express well the difference between their results and previous work by providing examples that compare their notion of equilibrium with existing notions. Finally, I agree that determining *"what is the strongest notion of rationality that can be attained efficiently"* is an interesting open problem, and indeed the paper provides some progress in this direction. Weaknesses: - There is no obvious algorithmic contribution here: As noted, Theorem 3.1 is the main technical contribution, which is purely geometric. Given that, constructing the algorithm is fairly straightforward with the scheme of Gordon et al. [2008]. - While the background the paper gives is quite solid, it is disproportionate to the relatively small amount of attention the authors give to their technical contribution. In particular, while the statement of Theorem 3.1 is clear, it is not very intuitive, and it is hard to understand why it is correct. This is also the main reason that my confidence score is relatively low. I think that some of the preliminaries, as well as the empirical evaluations, could be deferred to the appendix - instead, I think that giving the main proof ideas is more important. **Missing related work:** I think that the work of Anagnostides et al. (2022) is very relevant here and motivates important questions for future work. Specifically, this work achieves a much better rate of $\log T$ for trigger regret. Thus, it is fairly natural to ask whether this is also attainable for linear-swap regret, especially since they also built upon the template of Gordon et al. [2008]. Anagnostides, I., Farina, G., & Sandholm, T. (2022). Near-Optimal $\Phi $-Regret Learning in Extensive-Form Games. 
arXiv preprint arXiv:2208.09747. Technical Quality: 3 good Clarity: 3 good Questions for Authors: N/A Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the comments and observations on our work. Below we address the weaknesses discussed: (W1) The geometric contribution is the key to the algorithmic contribution of the paper: the structure provided by our characterization theorem (Thm 3.1) is what enables constructing agents minimizing regret with respect to all linear deviations in polynomial time in imperfect-information sequential games. Hence, one cannot untangle the two: without the geometric contribution, there is no algorithmic contribution. As correctly mentioned in the review, Gordon et al. [2008] provide an elegant recipe for constructing a no-$\Phi$-regret algorithm, as long as two key ingredients can be provided: a polynomial-time (per iteration) no-external-regret algorithm for the set of deviations $\Phi$, and an efficient fixed point oracle. The first of these components can be rather complicated, as is the case in our paper, since the structure of the set of deviation functions $\Phi$ can be arbitrarily complex. (W2) Thank you for proposing this improvement to our paper's presentation. Since the camera-ready allows an extra content page, we will use it to include a more detailed intuition of the main proof ideas. We include here below a more high-level, intuitive description of the theorem and the challenges arising in its proof. The proof proceeds by induction on the game tree as follows: * The Base Case (appendix, pg. 15) corresponds to the case of being at a leaf decision point, that is, an information set for which all actions lead to termination of the game. In this case, the set of deviations corresponds to all linear transformations from a probability $n$-simplex into a given polytope $\mathcal{P}$ in $\mathbb{R}^d$. This set is equivalent to all $d \times n$ matrices whose columns are points in $\mathcal{P}$, which is not hard to verify mathematically. This corresponds to constraint (3) of our characterization. 
* For the inductive step, we are at an intermediate decision point (information set of the learning player), that is, one for which at least one action leads to further decision points. Let $j$ be the decision point. In Lemma B.8 we show that any terminal action $a$ at $j$ (that is, such that no further decision points follow it) leads to a column in the transformation matrix that is necessarily a valid point in the polytope $\mathcal{P}$. This is similar to the base case, and leads to constraint (3) in our formulation. In Lemma B.7, we consider the other case of a nonterminal action $a$ at $j$ (that is, such that one or more further decision points follow it) and show that the corresponding column in the transformation matrix can be taken to be identically 0 (constraint (4)) without loss of generality. This allows the "crux" of the transformation to happen in the subtree rooted at $ja$. A key difficulty is in characterizing all valid transformations on such a subtree. In particular, several decision points can have $ja$ as their parent sequence. The set of strategies in the subtree rooted at $ja$ is therefore, in general, the Cartesian product of the strategy sets of the subtrees rooted at each of the decision points whose parent sequence is $ja$. This intuitively explains the need for the (fairly technical and involved) Proposition B.4, whose goal is to precisely characterize valid transformations of Cartesian products. The characterization for the Cartesian products leads to constraints (5) and (6) in our final characterization. (Missing related work) Thank you for the intriguing question; we will definitely include a discussion about that paper in the revision. The question as to whether $O(\log T)$ regret per player can be attained in self-play is very interesting. 
While the paper you mentioned proposes a general methodology that applies to CEs in normal-form games and EFCE/EFCCE in sequential games, the authors’ construction fundamentally relies on being able to express the fixed points computed by the algorithm as (linear combinations of) rational functions with positive coefficients of the deviation matrices. In the case of CEs, this characterization follows from the Markov chain tree theorem, while in EFCE and EFCCE this fundamentally follows from the fact that the fixed points can be computed inductively, solving for stationary distributions of local Markov chains at each decision point. In the case of linear transformations considered in our paper, such a local characterization of the fixed point is unknown. We will definitely include a discussion about this, thanks again for the suggestion. --- Rebuttal Comment 1.1: Comment: I thank the authors for their response - I have no further questions at this point.
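To make the Gordon et al. [2008] template referenced in these rebuttals concrete, below is a minimal sketch of the recipe in its simplest special case: swap regret over a single probability simplex (a normal-form game), which is far simpler than the paper's sequence-form setting. The no-external-regret minimizer over the deviation set $\Phi$ (here, column-stochastic matrices) is projected online gradient descent, and the fixed-point oracle is damped power iteration; in the paper's setting the fixed-point and projection steps require the LP/ellipsoid machinery instead. All names and parameter choices here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def project_columns_to_simplex(M):
    """Euclidean projection of each column of M onto the probability simplex."""
    P = np.empty_like(M)
    n = M.shape[0]
    for j in range(M.shape[1]):
        v = M[:, j]
        u = np.sort(v)[::-1]
        css = np.cumsum(u)
        rho = np.nonzero(u * np.arange(1, n + 1) > (css - 1.0))[0][-1]
        theta = (css[rho] - 1.0) / (rho + 1)
        P[:, j] = np.maximum(v - theta, 0.0)
    return P

def fixed_point(M, iters=200):
    """Approximate x with M x = x via damped power iteration.
    Column-stochastic M maps the simplex to itself, so x stays a distribution."""
    n = M.shape[0]
    x = np.full(n, 1.0 / n)
    for _ in range(iters):
        x = 0.99 * (M @ x) + 0.01 / n
    return x

def gordon_swap_regret_dynamics(utilities, eta=0.1):
    """Gordon et al.-style reduction (toy version): run projected OGD over the
    deviation set Phi (column-stochastic matrices) and, each round, play a
    fixed point of the current transformation."""
    n = len(utilities[0])
    M = np.full((n, n), 1.0 / n)  # uniform column-stochastic start
    plays = []
    for u in utilities:
        x = fixed_point(M)
        plays.append(x)
        # linear loss f(M) = -u . (M x); its gradient is -outer(u, x),
        # so the ascent step moves mass toward the better-rewarded rows
        M = project_columns_to_simplex(M + eta * np.outer(u, x))
    return plays
```

Against a fixed utility vector favoring action 0, the dynamics drive play toward that action, since the best fixed linear deviation maps everything onto it.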
Summary: This paper focuses on addressing the challenge of minimizing linear-swap regret in extensive-form games, a stronger notion than the trigger regret underlying Extensive-Form Correlated Equilibrium (EFCE). To achieve an efficient implementation of the Phi-regret minimization problem (where the set Phi contains all linear-swap transformations), the authors provide a polynomial-sized description of all linear-swap operators. This result shows that there exists a no-linear-swap-regret algorithm whose per-iteration complexity is polynomial in the size of the game tree. Additionally, the paper presents experimental results that illustrate how the equilibria achieved through the no-linear-swap-regret algorithm exhibit greater strength compared to EFCE. Strengths: This paper introduces a novel and essential contribution by providing a polynomial characterization of the set of all linear transformations from a sequence-form strategy polytope to itself. This characterization holds significant value for future research on linear-swap regret. Weaknesses: The contribution of this paper may be perceived as limited. The primary and significant contribution lies in the polynomial description of the linear-swap transformation set. Given the sequential structure of the sequence-form strategy, this finding is not entirely unexpected. It appears that the paper lacks some intuitive explanations regarding the concept of linear swap (further details can be found in the questions raised). There is a sense that LCE (linear-deviation correlated equilibrium) is merely one equilibrium between CE (Correlated Equilibrium) and EFCE (Extensive-Form Correlated Equilibrium) that is relatively easy to compute. The experimental results may not comprehensively demonstrate the potential and strength of LCE. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Is there any intuition behind the linear-swap transformation, apart from its linearity? 
For instance, can LCE be interpreted with the presence of a mediator? Why do you believe LCE is an equilibrium worth considering? 2. Maximizing social welfare is known to be challenging. However, in your experiments, did you observe any significant differences in social welfare between No-linear-swap-regret dynamics and No-trigger-regret dynamics? 3. Since No-linear-swap-regret dynamics are stronger than No-trigger-regret dynamics. The result in Figure 2 is expected. What about the running times? Is the running time of no-linear-swap-regret dynamics substantially larger? [1] Zhang, Brian, and Tuomas Sandholm. "Polynomial-time optimal equilibria with a mediator in extensive-form games." Advances in Neural Information Processing Systems 35 (2022): 24851-24863. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have partially addressed the limitations of their work, though there is space for improvement (see the section Weaknesses). Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the comments and observations on our work. Below we address the questions raised. (Q1) We believe linear-deviation correlated equilibria (LCEs) are most naturally understood as the name of the equilibrium points that emerge from the higher-rationality no-regret learning agents we construct. To recap briefly, our main goal in this paper is to make progress on the challenging question of what is the strongest notion of no-regret learning/hindsight rationality that can be attained in polynomial time in imperfect-information sequential multiagent settings. The best prior results only applied to extremely structured and isolated subsets of linear transformations, including EFCE deviations and communication deviations (exhibiting a large gap compared to what is instead possible in matrix/nonsequential games, where no-swap-regret can be attained efficiently). In our paper, we show that learning to be robust to (i.e., not regret) **any** linear transformation can be guaranteed in polynomial time, subsuming virtually all prior known notions (including EFCCE, EFCE, and the very recent work on communication equilibria in Bayesian games by Fujii [1]). Another reason to care about this notion of rationality is that it has been recently shown by Mansour et al. [2] that all weaker notions of rationality automatically allow the environment to exploit the agent. One can also think of LCE in the context of a mediator (correlation device) that samples strategy profiles and recommends the corresponding pure strategies to players. The mediator’s concern is to find a correlated distribution of play such that no player would be better off by deviating unilaterally using any linear transformation of their strategy. This is akin to, but significantly more general than, EFCE, where the players can only use trigger deviations, a special class of linear transformations. 
However, while trigger deviations can be re-interpreted as the players being able to transform behavior conditionally only on the recommendation of one information set, we do not know if such an intuitive interpretation can be given about the set of all linear deviations. While this paper is mostly focused on the learning and computational aspects, exploring these game-theoretic modeling questions for LCE is an interesting question for future work. [1] Kaito Fujii, 2023. Bayes correlated equilibria and no-regret dynamics. Arxiv. [2] Yishay Mansour, Mehryar Mohri, Jon Schneider, and Balasubramanian Sivan, 2022. Strategizing against learners in bayesian games. COLT. (Q2) Thanks for the interesting question. Since LCE is a subset of EFCE, the maximum social welfare that can be achieved by LCE can only be $\leq$ that reachable by EFCE (intuitively, the agents are more rational, so it takes more effort to incentivize them towards any correlated behavior). Indeed, we empirically observe that the utility reached by LCE is lower than that of EFCE in the experiments of Figure 2. You can observe this, for example, in the experimental data we included in the supplemental material, together with the implementation of our dynamics. (Q3) One thing we wanted to demonstrate with Figure 2 is that in practice there is indeed a perceptible difference between the regret achieved by no-swap and no-trigger regret dynamics, even in fairly small and simple games. The per-iteration time complexity of our linear-swap regret minimization algorithm is worse than that of the trigger-regret minimization algorithm, due to the projection step and the explicit computation of the fixed point (steps (5) and (6) of Algorithm 1). 
While the algorithm that we provide in the paper (based on the ellipsoid method) is enough for showing that minimizing linear-swap regret can be done in polynomial-time, we conjecture that a better understanding of the geometry of the linear deviations polytope could enable sidestepping the expensive ellipsoid method used in our projection step and lead to provably and practically better running times. We leave this interesting question for future work. (W) Finally, we would like to leave a small comment on the sentence "Given the sequential structure of the sequence form strategy, this finding is not entirely unexpected." from the Weaknesses section. Specifically, we would like to highlight that the proof of this characterization is very far from trivial, as is also evident from the appendix included in the supplemental material. --- Rebuttal Comment 1.1: Comment: Thank you for answering my questions! I have increased my score to 6.
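The fixed-point computation discussed in the rebuttals (step (6) of Algorithm 1) requires LP/ellipsoid machinery over the sequence-form polytope in the paper's setting. On a single probability simplex, however, a fixed point of a column-stochastic transformation can be recovered with plain linear algebra, which may help illustrate what the oracle computes. This snippet is an illustrative sketch under that simplifying assumption, not part of the paper's implementation.

```python
import numpy as np

def simplex_fixed_point(M):
    """Fixed point x of a column-stochastic matrix M on the simplex:
    solve (M - I) x = 0 together with sum(x) = 1 by least squares.
    (For an irreducible stochastic M the solution is automatically
    nonnegative; the general polytope case needs an LP instead.)"""
    n = M.shape[0]
    A = np.vstack([M - np.eye(n), np.ones((1, n))])
    b = np.concatenate([np.zeros(n), [1.0]])
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# Example: a 2x2 column-stochastic transformation
M = np.array([[0.9, 0.2],
              [0.1, 0.8]])
x = simplex_fixed_point(M)  # the stationary distribution of M
```

For this $M$, solving $0.9x_0 + 0.2x_1 = x_0$ with $x_0 + x_1 = 1$ gives $x = (2/3, 1/3)$.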
Rebuttal 1: Rebuttal: We thank all the reviewers for the detailed comments and constructive feedback. We have addressed the reviewers’ comments/questions individually below.
NeurIPS_2023_submissions_huggingface
2023
Large Language Model Guided Tree-of-Thought
Reject
Summary: This paper presents an approach (called Tree-of-Thought, or ToT) for boosting the problem-solving abilities of LLMs by means of backtracking in solution space. The proposed ToT architecture augments the LLM with four modules, and is broadly framed to include multiple potential implementations of those modules, including neural networks (for the prompter and controller) that could be trained by means of policy gradients. One implementation of ToT is evaluated on Sudoku, where it demonstrates significant improvement over an LLM without the ToT augmentations. Strengths: The motivation is highly topical, as improving the reasoning power of LLMs is a challenge of great research interest and economic value. Most of the paper’s presentation is clear, and the experimental results seem fairly sound, as far as they go. Weaknesses: The work is limited in four ways. 1 - While the paper lays out a fairly sophisticated and general architecture, only one narrow implementation of ToT is actually tested. Specifically, the current version does not include the policy-gradient options laid out in Algorithm 1 and equations 1-4. Instead, the tested implementation uses the much simpler rule-based controller and LLM-based prompter that have no additional weights to train. This makes it impossible at present to draw conclusions regarding the effectiveness of more advanced realizations of the architecture. 2 - The evaluations, limited as they are to Sudoku, do not demonstrate the generality of the approach, despite the assertion that "in principle it can handle many other mathematical and logical reasoning tasks''. For comparison, consider the recent work of Yao et al., 2023, *Tree of Thoughts: Deliberate Problem Solving with Large Language Models*, which reports evaluations on three separate problem types: Game of 24, Creative Writing, and 5x5 Crosswords. 
3 - Since humans play Sudoku in graphical rather than language-based forms, there is little reason to expect text-only LLMs to perform particularly well on Sudoku at all. And for any algorithmic puzzle (like Sudoku) for which solutions can be explicitly verified, it is unsurprising that explicit checking for valid solutions would boost the performance of an LLM alone. The more central question is how much LLMs contribute to solving such puzzles, but this question is not addressed. The *baselines* tested (zero-shot, one-shot, and few-shot) are actually just *ablations* of ToT, all of them using LLMs. 4 - This paper discusses none of the prior work on Sudoku, such as *SATNet: Bridging deep learning and logical reasoning using a differentiable satisfiability solver* by Wang et al., 2019, which provided a Sudoku benchmark dataset that was used by Ahmed et al., 2023, in *Semantic Strengthening of Neuro-Symbolic Learning*. The *Neural Logic Machine* of Dong et al., 2019, has also been applied to Sudoku (https://github.com/ashutosh1919/neuro-symbolic-sudoku-solver), achieving a 94% success rate on 5x5 puzzles, which is significantly better than the ToT results reported in this paper. Because of these limitations, the work’s contributions do not seem significant at this point. Nevertheless, the general architecture might prove to be significant once it is fully implemented and tested on a broader set of benchmarks for which baseline results from the literature are available for comparison. Technical Quality: 3 good Clarity: 3 good Questions for Authors: How many Sudoku instances were tested to produce each bar in Figure 2? Just 10 problems each? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. 
Soundness: 3 good Presentation: 3 good Contribution: 1 poor Limitations: Regarding the first limitation described in the Weaknesses section of this review, the paper states "We expect a neural network based ToT controller will perform better, which we will verify in the future extension of this work." However, the crucial point is that policy-gradient training is not evaluated at all, and this limitation is never stated with sufficient clarity. The other three limitations are not adequately addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
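The reviewer's point about explicit solution checking can be made concrete: for the box-free Sudoku variant evaluated in the paper, a verifier for partial boards is only a few lines of code. This is an illustrative sketch (not the paper's actual checker module), using 0 to mark an empty cell:

```python
def is_valid_partial(board):
    """Return True if no non-zero digit repeats within any row or column
    of an n x n box-free Sudoku board (0 marks an empty cell)."""
    rows = [list(r) for r in board]
    cols = [list(c) for c in zip(*board)]
    for line in rows + cols:
        filled = [v for v in line if v != 0]
        if len(filled) != len(set(filled)):
            return False
    return True
```

Given such a cheap verifier, any sample-and-check loop around an LLM benefits, which is why the ablation question above (how much the LLM itself contributes) is central.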
Summary: This paper presents tree-of-thought (ToT) as a way of using LLMs to solve problems. ToT involves search + backtracking in a tree-like structure. The work demonstrates the success of this method in simplified sudoku tasks. Strengths: The idea is interesting, and the Sudoku tasks are a reasonable regime for evaluating it. Weaknesses: While the method is interesting, the evaluation is minimal. Furthermore, while the sudoku tasks are reasonable as a single setting for evaluation, as the only setting they are quite a narrow domain. The paper also seems to be missing some valuable ablations/baselines. In more detail: 1) The ToT method appears to contain many details which are not individually tested with ablations. Without testing these, it is unclear which aspect is relevant, and what we should learn from the work. - For example, the paper emphasizes the benefit of the tree-structured generation, which allows the model to backtrack in its reasoning. However, if I understand correctly, none of the baseline experimental conditions include *any* form of generating multiple answers, or using the checker to verify the answer is correct. For example, rather than a tree search, the model could just run 10 complete rollouts (complete candidate answers), and then use the checker to identify the correct one (if any). Or, the model could use the checker to decide at each step whether to accept a generation, but without backtracking further. Would one of these methods perform as well? If so, then the key feature is not the tree structure per se, but merely the possibility of generating multiple completions and then checking. In order to understand the contributions of the method, it would be necessary to see these ablations. - Likewise, the ToT method appears to benefit from a prompter policy which can decide on the examples for the prompt (if I understand correctly), while the other methods don't; perhaps a prompter is all that's needed. 
- More fundamentally, if I understand correctly (although the paper is not entirely clear on this point), the ToT method is the only one that involves training; there are many ways the one/few-shot prompts could be "trained", such as selecting the examples in the prompt. Ideally, the methods would be evaluated in a setting where the baselines can also benefit from training. 2) The evaluation is quite minimal, in both scope and depth. - The method is only tested on a single domain (simplified sudoku, without the box constraints); sudoku is a well-known puzzle, and so is likely to be well-covered in the training corpus. While that does not mean the present results are invalid, it would be useful to see a demonstration of the idea in other, less common domains (even other NP ones, such as Traveling Salesman, are likely much rarer in the training corpus), or ideally fully novel ones. - What evaluation there is only involves a handful of puzzles (10 per condition). The differences between ToT and FS in each condition would not achieve statistical significance by an exact binomial test. While the consistent benefit of ToT across conditions makes it more plausible that there is an overall positive effect, it would be useful to run a larger number of puzzles per condition in order to more accurately assess the magnitude of that effect. 3) The paper could do a more thorough job of situating the present work within the existing literature with similar motivations, e.g. https://arxiv.org/abs/2208.14271 or https://proceedings.neurips.cc/paper/2021/hash/d3e2e8f631bd9336ed25b8162aef8782-Abstract.html 4) The paper presents itself as though it evaluates on Sudoku, but in fact it evaluates on a simplified variant (no boxes). The present abstract, for example, seems somewhat misleading in that it never makes clear that the algorithm is tested on a smaller, simpler version of the task. 
5) It would be nice to see comparisons with other language models (especially smaller ones, and ones with an open training process) to understand how general the results are. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Is there something I misunderstood about the baselines; e.g. are some of them also matched to ToT in terms of training/optimization or compute/calls to the checker? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: Yes, limitations are discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
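The "multiple rollouts plus checker" ablation this review proposes is easy to prototype. In the sketch below, `sample_rollout` is a hypothetical random stand-in for an LLM call (a real ablation would query the model), and the checker accepts a board iff every row and column is a permutation of 1..n, matching the box-free variant:

```python
import random

def sample_rollout(n, rng):
    """Hypothetical stand-in for an LLM rollout: propose a complete
    n x n grid uniformly at random."""
    return [[rng.randint(1, n) for _ in range(n)] for _ in range(n)]

def is_solution(board):
    """Checker: every row and column is a permutation of 1..n
    (the box-free Sudoku variant)."""
    n = len(board)
    target = set(range(1, n + 1))
    rows_ok = all(set(row) == target for row in board)
    cols_ok = all(set(col) == target for col in zip(*board))
    return rows_ok and cols_ok

def best_of_k(n, k, seed=0):
    """Sample k independent rollouts; return the first one the checker
    accepts, or None if none of them is valid."""
    rng = random.Random(seed)
    for _ in range(k):
        cand = sample_rollout(n, rng)
        if is_solution(cand):
            return cand
    return None
```

If such a best-of-k baseline matched ToT, the tree structure per se would not be the key ingredient; that is exactly the ablation the review asks for.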
Summary: The paper presents a novel algorithm ToT (tree-of-thought) based on: - LLM (GPT 3.5 in this case) - checker module (which verifies solutions and partial solutions) - memory module - ToT controller that guides the search (it can be a neural network or a set of rules) - prompter agent (in this paper this is a policy network that selects the best in-context examples for a given tree node). The ToT algorithm is tested on a challenging Sudoku task for 3 different board sizes. Strengths: The approach is novel. It mixes a general LLM with two neural networks, trained together. The introduction section is good: it identifies two main limitations for using LLMs in complex problem-solving. Sudoku is a complex task that requires backtracking and a search, which makes it interesting in the context of ToT. The training of policies is an interesting idea that could be of use in other LLM-based algorithms. Weaknesses: The biggest weakness of this paper is the small number of experiments, which are also conducted on a single task (sudoku). In the text, many different versions of ToT are discussed; however, experiments are done only for a single setup and a single task. ToT was not tested on any other task, thus we cannot know if it really generalizes at all. There are far too few experimental results and data. What is missing: - How many nodes does ToT need on average to solve a given task? - How many steps does the baseline need to solve a given task? - There are only 3 versions of the Sudoku: for n=3, 4, and 5. (This would be acceptable if there were more tasks.) What happens for n >= 6? If ToT is still the best then it would be great for the method. If not, we will clearly know where the limit for ToT is. If the method is too slow for n>=6 it is important to know. - What is the price (or number of tokens needed) on average for a single ToT run? - What was the number of Sudoku puzzles used for evaluation? It is not clearly stated in the text; however, I guess it was 10 boards for each n=3,4,5. 
If I am wrong please correct me. If it was 10 boards then the results cannot be trusted at all. In such a case, for the 0.4 success rate, the 90% confidence interval is (0.15 - 0.7), which tells nothing about the real results. Results on 10 testing boards have no scientific meaning, and this is the main reason for the low score I gave. If I am wrong (and the number is higher), I will be happy to improve the score. - There are no error bars in Figure 2. Authors claim that: "If the ToT framework can solve instances of the generalized Sudoku [...] in principle it can handle many other mathematical and logical reasoning tasks.". The claim that ToT should be able to handle other complex problems is based on the idea that complex tasks require a similar way of thinking. More advanced mathematical problems like automated theorem proving have their own sources of complexity (e.g. choosing the appropriate lemmas to consider, or how to represent the state in a compact form which fits the transformer). I know that the authors do not claim that ToT certainly works on such tasks, but after reading the paper it seems that the significance of the paper is built upon a promise that ToT can be easily adapted to more serious problems. Since there are no experiments to support this claim, I think that the significance of this paper is limited. There are no experiments concerning other variants of ToT: for example, with a neural-network checker or a rule-based policy. Notation in Algorithm 1 is hard to understand. I had trouble understanding which \pi stands for the ToT policy and which for the prompt policy. Please consider more natural notation like \pi_{tot}, \pi_{prompt}, or similar. Algorithm 2: The meaning of (nil) is not introduced in the algorithm; it is only introduced later in the text. I think that the version of the text submitted for review should use the appropriate LaTeX options, for example, line numbers. It is hard for me to refer to concrete lines without them. 
It makes no sense to me to describe the procedure of training the ToT policy if it was not used in the experiments. Technical Quality: 1 poor Clarity: 2 fair Questions for Authors: The most important: what was the number of test boards for each n? What is the hierarchy theorem? You should briefly explain it in the paper for readers who are not familiar with complexity theory. Equations (1) and (3): what is s_i precisely? How is it represented? What networks were trained in the experiments? It is hard to find this in the text. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 1 poor Presentation: 2 fair Contribution: 2 fair Limitations: In the paper, there is no separate section for limitations (some are mentioned in Section 5). Many of the missing limitations were already described in the Weaknesses section of this review. Flag For Ethics Review: ['No ethics review needed.'] Rating: 2: Strong Reject: For instance, a paper with major technical flaws, and/or poor evaluation, limited impact, poor reproducibility and mostly unaddressed ethical considerations. Code Of Conduct: Yes
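The reviewer's interval can be checked: for 4 successes in 10 trials, the exact (Clopper–Pearson) 90% confidence interval is roughly (0.15, 0.70), matching the (0.15 - 0.7) quoted above. A stdlib-only sketch that inverts the binomial tail probabilities by bisection:

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def clopper_pearson(k, n, alpha=0.10):
    """Exact (1 - alpha) confidence interval for a binomial proportion,
    obtained by bisecting the binomial tail probabilities in p."""
    def bisect(f, decreasing):
        lo, hi = 0.0, 1.0
        for _ in range(100):
            mid = (lo + hi) / 2
            # Move toward the root of f(p) = alpha / 2, accounting for
            # whether f is increasing or decreasing in p.
            if (f(mid) > alpha / 2) == decreasing:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2
    # Lower bound: largest p with P(X >= k | p) = alpha/2 (increasing in p).
    lower = 0.0 if k == 0 else bisect(lambda p: 1 - binom_cdf(k - 1, n, p), decreasing=False)
    # Upper bound: p with P(X <= k | p) = alpha/2 (decreasing in p).
    upper = 1.0 if k == n else bisect(lambda p: binom_cdf(k, n, p), decreasing=True)
    return lower, upper
```

For `clopper_pearson(4, 10, alpha=0.10)` the interval is approximately (0.150, 0.696), which is exactly why 10 boards per condition tell us so little.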
Summary: The paper introduces the Tree-of-Thought (ToT) framework, a novel approach to enhance the problem-solving capabilities of large language models (LLMs). The ToT technique mimics the human mind's trial-and-error thought process, allowing LLMs to explore the solution space of complex reasoning tasks and backtrack when necessary. The paper presents an implementation of a ToT-based Sudoku puzzle solver and evaluates its effectiveness on a suite of Sudoku puzzle benchmarks. The experimental results show that the ToT framework can significantly increase the success rate of Sudoku puzzle solving. The contributions of the paper include introducing a new approach to enhance the problem-solving capabilities of LLMs, presenting an implementation of a ToT-based Sudoku puzzle solver, and demonstrating the effectiveness of the ToT framework on a suite of Sudoku puzzle benchmarks. Strengths: 1. The motivation for moving from linear reasoning, like Chain-of-thought, to a tree-like searching/reasoning is strong and well recognized. Considering the fundamental limitation of autoregressive generation of GPT-like LLMs, we do need more advanced reasoning/search algorithms for better decoding. 2. The proposed method is reasonable and technically sound. The checker module echoes the recent findings of self-evaluation of LLMs, and the memory module is also useful in agent-based modeling. 3. The empirical results on the Sudoku puzzles show the effectiveness of the proposed method; especially as the puzzles become harder, the performance of the proposed method remains good. Weaknesses: 1. One of the biggest issues of this paper is the mismatch between the described method and the actual one used in the experiments. The paper spends a lot of space talking about how the ToT controller and prompter agent can be modeled by a policy network and trained via multi-agent RL. But it never tries this formulation and training in the experiments, and only presents them as future work. 
Without valid evidence, empirically or theoretically, the method section is largely questionable. 2. Another issue is the novelty of this work probably is not as big as the paper claims. The formulation of multi-agent RL for controller and agent probably is overcomplicated, and I encourage the authors to think of reformulating them all as LLMs to make things easier. Also, there are many missing related works [1, 2, 3, 4] that have a similar tree search/reasoning formulation with more operational and rigorous experiments. 3. The experiment scope is limited. The proposed method is only demonstrated in one task with the simple formulation of the controller and agent (discussed above). This is not sufficient for a NeurIPS paper, and we need to better figure out why and how the proposed method can or cannot be applied to more general tasks. Technical Quality: 1 poor Clarity: 2 fair Questions for Authors: I raised some questions in the weaknesses section, and there is considerable room for the authors to improve the work and answer those questions. The following are some minor comments: 1. It seems the catchy name of Tree-of-Thought has been popularized by another work [5], which draws far more attention than this one; I'd suggest the authors rethink the core contributions of the proposed method and position it in a different and unique way 2. I wonder whether the authors could explicitly explain what kind of search algorithms are used in the proposed method for a better understanding of the method. [1]. Xie, Yuxi, et al. "Decomposition enhances reasoning via self-evaluation guided decoding." arXiv preprint arXiv:2305.00633 (2023). [2]. Jung, Jaehun, et al. "Maieutic prompting: Logically consistent reasoning with recursive explanations." arXiv preprint arXiv:2205.11822 (2022). [3]. Zhu, Xinyu, et al. "Solving math word problem via cooperative reasoning induced language models." arXiv preprint arXiv:2210.16257 (2022). [4]. Hao, Shibo, et al. 
"Reasoning with language model is planning with world model." arXiv preprint arXiv:2305.14992 (2023). [5]. Yao, Shunyu, et al. "Tree of thoughts: Deliberate problem solving with large language models." arXiv preprint arXiv:2305.10601 (2023). Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 1 poor Presentation: 2 fair Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
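Regarding this review's question about which search algorithm is used: a checker-guided depth-first search with backtracking is one natural reading of the framework. The sketch below is purely illustrative (in the actual framework an LLM would propose candidate values; here they are simply enumerated), applied to the box-free Sudoku variant with 0 marking empty cells:

```python
def solve_dfs(board):
    """Depth-first search with backtracking on an n x n box-free Sudoku.

    Fills empty cells (0) in place and returns True on success. The
    candidate loop plays the role of LLM proposals in a ToT-style search;
    the `ok` check plays the role of the checker module.
    """
    n = len(board)

    def ok(r, c, v):
        # Checker: v must not already appear in row r or column c.
        return all(board[r][j] != v for j in range(n)) and \
               all(board[i][c] != v for i in range(n))

    for r in range(n):
        for c in range(n):
            if board[r][c] == 0:
                for v in range(1, n + 1):
                    if ok(r, c, v):
                        board[r][c] = v
                        if solve_dfs(board):
                            return True
                        board[r][c] = 0   # backtrack: undo the tentative fill
                return False              # no candidate fits: signal backtrack
    return True                           # no empty cells left: solved
```

The "rule-based controller" of the paper could plausibly correspond to the accept/backtrack decisions in this loop, but the paper does not pin this down, which is part of the reviewers' complaint.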
Rebuttal 1: Rebuttal: We sincerely extend our gratitude to the reviewers for their valuable feedback, especially for the thoughtful suggestions regarding the evaluation method, ablation studies, and the discrepancy between the algorithm presented in the paper and its actual implementation in the experimental study. Your input has been very helpful, and we will integrate the suggested changes into a future version of the paper. We appreciate your feedback.
Summary: This paper proposes a tree-of-thought (ToT) framework to improve complex reasoning and problem solving capabilities of auto-regressive language models. Specifically, motivated by how humans process thoughts with trial and error, ToT maintains a memory module, and employs a ToT controller to decide when to proceed on a thought of nodes in a tree, and when to backtrack to a previous parent node depending on a checker. When evaluated on 3x3, 4x4, and 5x5 Sudoku puzzles, the results suggest that ToT is effective compared to few-shot baselines. Strengths: 1. This paper presents an interesting tree-of-thought method to enable back-tracking in auto-regressive language models. This solves one of the key limitations of LLMs. Similar to humans and inspired by system 2 reasoning, the proposed ToT structure, especially how we can employ a checker to dynamically modify and utilize memory, makes a great contribution to the field, and can inspire future work on how LLMs can be prompted, and even pre-trained. Weaknesses: 1. Although the high-level idea of tree-of-thought is promising, with corresponding ToT controller, agent, and memory, the paper is only evaluated on one Sudoku task, especially when the details of evaluation (e.g., number of games evaluated, and computational cost and prompts used compared to the baselines) are not specified. This makes the evaluation results less convincing. Moreover, although the method sounds generalizable, there is no strong evidence of how each module in the framework should actually be implemented to be effective (apart from briefly mentioning future work at the end). 2. It is not clear how each module in the ToT framework should work in detail. For example, while the memory module seems compelling (the LLM can retrieve a previous configuration when backtracking), there is no explicit demonstration of how the memory is maintained or how backtracking would work. 
Furthermore, the ToT controller is rule-based in the experiment, but Sections 3.2 and 3.3 mostly explain how the ToT controller should be trained, similar to a policy network. This makes it very confusing to judge the proposed method. I would suggest that the authors add more detailed illustrations using specific examples in the paper revision. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. Can you provide more evaluation details for the Sudoku setup? For example, how many games are used for evaluation, and what prompts are provided to the LLM (especially when comparing to the baselines)? 2. Why do you use a rule-based controller for backtracking? How do you derive the rules? 3. Can you provide more details in terms of how backtracking interacts with the memory? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: The authors only briefly mentioned the limitations in terms of implementation, but not the limitations of the method overall, such as computational cost. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
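One way to make the memory/backtracking interaction this review asks about concrete (a purely illustrative design; the class name and interface are not from the paper): treat the memory module as a stack of board snapshots, where the controller checkpoints the state before a tentative fill and pops the stack to backtrack when the checker rejects the result.

```python
import copy

class Memory:
    """Toy memory module: a stack of solver-state snapshots.

    push() checkpoints the current state (deep-copied, so later mutations
    do not corrupt the snapshot); backtrack() restores the most recent
    checkpoint, or returns None when there is nothing to restore.
    """
    def __init__(self):
        self._stack = []

    def push(self, state):
        self._stack.append(copy.deepcopy(state))

    def backtrack(self):
        return self._stack.pop() if self._stack else None

mem = Memory()
mem.push([[1, 0], [0, 0]])    # checkpoint before a risky fill
state = [[1, 2], [2, 2]]      # controller detects an invalid partial fill
state = mem.backtrack()       # restore the last consistent configuration
```

Whether the paper's memory module works this way is exactly what the review says is left unexplained.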
A Specialized Semismooth Newton Method for Kernel-Based Optimal Transport
Reject
Summary: The authors propose an implementation of Vacher et al. (2021) based on a Semi-Smooth Newton (SSN) scheme. They reformulate their optimization problem as a root-finding problem (Proposition 3.1) to which they apply the SSN scheme. They provide convergence guarantees (Theorem 3.3) that give an $O(1/\sqrt{T})$ convergence rate, where $T$ is the number of iterations, and provide an efficient way to reduce the cost per iteration (l.184 - l.224). Then they provide numerical experiments to validate that their method is faster than the one proposed in Vacher et al. (2021). Strengths: Trying to get a scalable version of kernel-based OT is a very legitimate topic as current implementations are slow and impossible to run on real data sets. Indeed, recall that using an Interior-Point-Method, kernel-based OT was solved with a precision $\epsilon$ in $O(n^{3.5} \log(n/\epsilon))$ time, where $O(n^3)$ comes from the cost per iteration and $O(\sqrt{n}\log(n/\epsilon))$ is the number of iterations. In this paper, the main contribution is to get rid of the dependency on $n$ in the number of iterations, which is indeed a desirable feature. From my understanding, the authors can solve kernel-based OT with precision $\epsilon$ in $O(1/\epsilon^2)$ iterations. Weaknesses: I believe that the authors oversell the work. As can be deduced from my comment above, the proposed method requires $O(1/\epsilon^2)$ iterations for a precision $\epsilon$ while previous work requires $O(\sqrt{n}\log(n/\epsilon))$ iterations. When high precision is sought after ($\epsilon \to 0$), the proposed algorithm is indeed less efficient. The authors should have explicitly mentioned that. Furthermore, nothing precise is said about the per-iteration cost, which is a crucial component of the practical efficiency. We can vaguely guess that it is $O(n^3)$ but it is stated nowhere. The overall writing is confusing; the whole part on computational efficiency should be clearly stated in a theorem or a proposition. 
Technical Quality: 3 good Clarity: 2 fair Questions for Authors: I am actually skeptical on the $O(1/\sqrt{T})$ convergence rate. What is the dependency in the regularizers $\lambda_1, \lambda_2$ and more generally in the condition number? In the case of Vacher et al. (2021), there is little dependency in the condition number as they use an IPM-like method. Do SSN methods also weakly depend on the conditioning of the problem? Note that this is a crucial aspect as these regularizers implicitly depend on the number of samples $n$. On the experimental side, it is claimed that the proposed method is faster. Yet which stopping criterion was used? Was it the same for both algorithms? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: The authors do not compare with enough precision their algorithm with the existing one, both in theory and in practice. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your time and your input. We hope that with our answer below we will convince you about the merits of our work. Below, we reply to your main questions point-by-point and have included these discussions in the revised version of our paper. 1. **The proposed method requires $O(1/\epsilon^2)$ iterations while IPM requires $O(\sqrt{n}\log(n/\epsilon))$ iterations. Considering the high precision, the proposed algorithm is indeed less efficient.** We agree that our method is less efficient for the case of high precision (i.e., $\epsilon \rightarrow 0$) but argue that the better dependence on the sample size $n$ is more desirable since a large sample size is necessary to ensure better statistical approximation. In particular, the same authors of Vacher et al. (2021) have mentioned in their subsequent paper (see [38], Page 11-12): *This method has two drawbacks: first, its cost is prohibitive when the number of samples becomes large, which is necessary to ensure better statistical approximation, and second, $\ldots$* Thus, it is important to design new kernel-based OT algorithms that have better dependence on $n$. Such discussion about the trade-off between $n$ and $1/\epsilon$ has occurred in the community. Indeed, the plug-in OT estimation can be formulated as a linear optimization problem and solved by the specialized IPM within $O(\sqrt{n}\log(n/\epsilon))$ iterations. It can also be solved by the Sinkhorn method within $O(1/\epsilon^2)$ iterations. Despite the worse dependence in terms of $1/\epsilon$, the Sinkhorn method has been recognized as more efficient than IPM in most cases since many OT application problems require only a low-accuracy solution ($\epsilon \sim 10^{-2}$) when the sample size $n$ is very large. Along this direction, we take a further step in kernel-based OT algorithmic design, and we hope that our idea may be useful more broadly. 
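For readers unfamiliar with the Sinkhorn method referenced in the exchange above, a minimal sketch of its iteration for entropic-regularized OT between two histograms (illustrative background only; this is not the kernel-based method of the paper). Each iteration costs $O(n^2)$: two matrix-vector products with the Gibbs kernel.

```python
import numpy as np

def sinkhorn(a, b, C, reg=1.0, iters=2000):
    """Entropic-regularized OT between histograms a, b with cost matrix C.

    Alternately rescales the rows and columns of the Gibbs kernel
    K = exp(-C / reg) so that the resulting plan matches both marginals.
    """
    K = np.exp(-C / reg)
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)   # match column marginals
        u = a / (K @ v)     # match row marginals
    return u[:, None] * K * v[None, :]   # transport plan

# Example: uniform marginals on 4 points with squared-distance cost.
n = 4
a = b = np.full(n, 1.0 / n)
C = (np.arange(n)[:, None] - np.arange(n)[None, :]) ** 2.0
P = sinkhorn(a, b, C)
```

The contrast drawn in the rebuttal is exactly this: each Sinkhorn step is cheap but the iteration count scales as $O(1/\epsilon^2)$, whereas IPM takes few but expensive iterations.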
We remark that our work does not downgrade the importance of IPM since our method becomes less efficient for small $\epsilon$. In our humble opinion, it seems promising to improve IPM by designing (1) the adaptive strategy; (2) the fast subproblem solvers as we have done for our method. 2. **Nothing precise is said on the per-iteration cost, which is a crucial component of the practical efficiency. We can vaguely guess that it is $O(n^3)$ but it is stated nowhere.** We agree that the per-iteration cost would be $O(n^3)$ in the worst case but argue that it can be much cheaper in practice. Indeed, the $O(n^3)$ cost comes from exactly solving the $n \times n$ linear system (see Page 7, Line 211). In Page 7, Line 212-214, we stated that this system can be *efficiently* solved by the conjugate gradient (CG) method or the symmetric quasi-minimal residual (QMR) method. In our experiment, we use CG to approximately solve this linear system and set the maximum iteration number to 20. Empirically, the average number of CG steps is less than 5. Also, the implementation of our method can be improved by exploiting the structure of Q, A and T_k, e.g., sparsity, but we have not incorporated it yet. In contrast, the linear system solved at each IPM step becomes severely ill-conditioned as the barrier parameter decreases and the matrix factorization has to be done exactly to achieve high precision. To summarize, our method suffers from the same per-iteration cost as IPM in the worst case but can be more flexible and efficient from a practical viewpoint. 3. **What is the dependency in the regularizers $\lambda_1$, $\lambda_2$ and more generally in the condition number? Do SSN methods also weakly depend on the conditioning of the problem?** The global rate of $O(1/\sqrt{T})$ is achieved since our method is at least as fast as the extragradient (EG) method for solving the min-max formulation (see Eq. (2.5)); indeed, our method alternates between EG and the regularized SSN method (see Line 232-238). 
We view Eq. (2.5) as a smooth and convex-concave min-max problem and know that EG achieves the *optimal* last-iterate convergence rate of $O(1/\sqrt{T})$ (see Cai et al. [6, Theorem 3]). This global rate depends on the smoothness parameter of Eq. (2.5) rather than the condition number of the original formulation of Eq. (2.2). The explicit dependence on $\lambda_1$ and $\lambda_2$ is unknown since the results of Cai et al. [6] do not provide the dependence on all problem parameters. Nonetheless, our experiment has shown that our method behaves well when the sample size is medium (~500), which is sufficient for kernel-based OT estimation in most cases. Similar to Newton methods, which are key ingredients of IPM, the SSN methods enjoy a weak dependence on problem conditioning; see *A nonsmooth version of Newton’s method*. In Appendix F, the proof of Theorem 3.4 gives the dependence on problem parameters. 4. **On the experimental side, it is claimed that the proposed method is faster. Yet which stopping criterion was used? Was it the same for both algorithms?** We apologize for the confusion we have created by not being specific. Indeed, we used the residue norm $\|R(w)\|$ (see Eq. (2.6)) as the measurement and terminated IPM and our method when $\|R(w)\|$ falls below the same threshold ($10^{-5}$). Notably, IPM can output a better solution than our method since the last iteration of IPM reduces $R(w)$ from $>10^{-5}$ to $\sim 10^{-7}$. However, the implementation from Vacher et al. (2021) is based on a short-step dual IPM and needs many iterations to reach $\sim 10^{-4}$. In contrast, our method uses an adaptive strategy and so needs far fewer iterations. Nonetheless, the IPM can also be improved using an adaptive strategy, e.g., Mehrotra's predictor–corrector rule, but this is beyond the scope of this paper. We thank you again for your detailed reading and your constructive input! 
We hope and trust that our replies have alleviated your concerns, and we look forward to an open-minded discussion if any such concerns remain. --- Rebuttal Comment 1.1: Title: Response Comment: I thank the authors for taking the time. I will also respond point by point. 1. Sinkhorn is more popular than vanilla OT not because of the $1/\epsilon^2$ number of steps (which I believe is actually $1/\epsilon$) but because of the $n^2$ cost per step. 2. The system in l. 212-214 is most likely ill conditioned, hence in theory, CG requires many steps to converge. 3. There is probably a dependence in the regularizers and it should have been clearly stated in the text. Overall I feel that the contribution is more empirical than theoretical, and I wish this had been clearly stated in the article. --- Reply to Comment 1.1.1: Title: Thanks for reacting to our rebuttal Comment: Thanks for your prompt reply. We hope that with our answers below we can convince you further about the merits of our work. Please let us know if you have any other concerns; we will do our best to answer them. > **1. Sinkhorn is more popular than vanilla OT not because of the $1/\epsilon^2$ number of steps (which I believe is actually $1/\epsilon$) but because of the $n^2$ cost per step.** We do not think these points are correct. Indeed, "vanilla OT" solvers, i.e. the linear optimization algorithms (e.g. the network simplex method), can also achieve a per-iteration complexity of $n^2$ (see, e.g., *Computational Optimal Transport*, Section 3.5.3, https://arxiv.org/abs/1803.00567). This is the effort required to check for a violating edge. To the best of our knowledge, the best known bound on the iteration complexity of the Sinkhorn method is $1/\epsilon^2$, proved in the following paper: *Computational optimal transport: Complexity by accelerated gradient descent is better than by Sinkhorn's algorithm*, ICML 2018, https://proceedings.mlr.press/v80/dvurechensky18a.html. 
The improved bound of $1/\epsilon$ can be achieved by some other efficient methods, e.g., a gradient-based method (see *A direct $\tilde{O}(1/\epsilon)$ iteration parallel algorithm for optimal transport*, NeurIPS 2019, https://proceedings.neurips.cc/paper_files/paper/2019/hash/024d2d699e6c1a82c9ba986386f4d824-Abstract.html) or a graph algorithm (see *A graph theoretic additive approximation of optimal transport*, NeurIPS 2019, https://proceedings.neurips.cc/paper_files/paper/2019/hash/9b07f50145902e945a1cc629f729c213-Abstract.html). We would appreciate it if you could provide the reference that proves the improved bound of $1/\epsilon$ for the Sinkhorn method. > **2. The system in l. 212-214 is most likely ill conditioned, hence in theory, CG requires many steps to converge.** We believe that the conditioning of linear systems inevitably appears in Newton methods, but this does not affect their value in either theory or practice. We have shown that our method is reliable in the experimental evaluation and that CG works well (a preconditioning technique is used there). Compared to IPM, we find that the linear systems arising in our semi-smooth Newton based method are better conditioned in the experiment. > **3. There is probably a dependence in the regularizers and it should have been clearly stated in the text.** We would appreciate it if you could clarify what you mean by "probably a dependence". Indeed, we have explained in the rebuttal why our method does not suffer from small values of the regularizers. It would really help us if you could provide an example sentence that clarifies what you mean by "it should have been clearly stated in the text". > **4. Overall I feel that the contribution is more empirical than theoretical and I wish this had been clearly stated in the article.** It is worth noting that our paper indeed has a practical purpose, hence it blends both aspects. 
Our goal is efficiency, to open up new research directions exploiting this kernel-based OT approach, as seen e.g. in our abstract: *In this paper, we propose a nonsmooth equation model for kernel-based OT estimation and show that it can be efficiently solved via a specialized semismooth Newton (SSN) method. Indeed, by exploring the special problem structure, the per-iteration cost of performing one SSN step can be significantly reduced in practice. We also prove that our algorithm can achieve a global convergence rate of $O(1/\sqrt{k})$ and a local quadratic convergence rate under some standard regularity conditions. Finally, we demonstrate the effectiveness of our algorithm by conducting experiments on both synthetic and real datasets.*
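The per-iteration cost discussion in the thread above (point 2) rests on approximately solving an $n \times n$ Newton system by conjugate gradient with a hard cap of 20 iterations. As an illustration only — the SPD matrix below is a synthetic stand-in, not the paper's actual Jacobian system, and no preconditioning is applied — here is a minimal capped-CG sketch:

```python
import numpy as np

def cg(A, b, max_iter=20, tol=1e-10):
    """Conjugate gradient with a hard iteration cap, as in the rebuttal's setup."""
    x = np.zeros_like(b)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs = r @ r
    for k in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            return x, k + 1
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x, max_iter

# Illustrative well-conditioned SPD system: a few CG steps suffice.
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
A = M @ M.T + 100 * np.eye(50)   # the shift keeps the condition number mild
b = rng.standard_normal(50)
x, n_steps = cg(A, b)
```

On well-conditioned systems like this one, CG finishes well within the cap; on the ill-conditioned systems the reviewer worries about, the cap trades accuracy per Newton step for a bounded cost, which is the design choice defended in the reply.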
Summary: The authors focus on the problem of approximating OT numerically. They focus on one approximated version of OT which leverages a Sum-of-Squares (SoS) approximation to attain both statistical guarantees and computational amenability. While the first proposal to solve this SoS approximation relied on interior point methods, the authors focus on a semi-smooth Newton method. It consists in viewing KKT optimality as an equation $R(w)=0$ and solving this equation using Newton updates. They derive the algorithm in this specific OT setting, and prove convergence guarantees and rates for their method. They show experiments on synthetic data to see the approximation impact, and compare with interior point methods. Strengths: This recent OT formulation satisfying statistical guarantees and being computationally amenable is an interesting quantity to estimate. The authors' proposal of another algorithm to estimate it and scale it to larger measures would increase the interest of this formulation to practitioners. Weaknesses: *The introduction is not precise enough* - Line 25, the rate $O(n^{-1/2d})$ is actually worse than the original rate. I think the authors meant a rate $O(n^{-2/d})$. - This is a secondary remark, but Line (31,32) another approach which attempts to regularize OT and ease computation is to consider mini-batches of input data. I mention the work [FZFGC] and references therein if the authors wish to complement their introduction review. - The citation [44] in your paper is irrelevant. It focuses on estimating the OT Monge map when it exists, which is not the problem of estimating the cost, which you consider. Also, saying ‘a specific […] estimator’ is a super vague formulation which should be made precise. - Do the authors have references or precise rates to defend the assertion in lines 45-47 that « interior-point method is well known to be ineffective […] as the sample size increases »? 
Similarly, do the authors have references that semi-smooth Newton methods have better convergence/scaling guarantees?
 - I do not understand the sentence « While there is an ongoing debate in the OT literature on the merits of computing the plug-in OT estimators v.s. kernel-based OT estimators […] ». Which debate is it? On which aspect does it especially focus? This sentence is too vague to be insightful. - I do not understand the sentence Line 129 « kernel-based OT estimators are better when the sample size is small and the dimension is high ». Does that mean that the fewer samples we have, the better the approximation? *The semi-smooth Newton method is not clear to understand*
 - Line 76, I think the authors should have introduced background knowledge on Semi-Smooth Newton methods instead of postponing it to the appendix. Furthermore, what is described by the authors is a review of previous contributions on this method, but no mathematical formulas are detailed. I would have put this part in the main body as related work, especially [33], which is exactly the same method as yours, but for unregularized OT, and which you do not mention as related work. Lastly, to provide a self-contained and pedagogical description, I would have ideally wanted a brief description of SSN in a general framework, so that your work is an instantiation of this formulation. - I think Definition 2.1 is not extremely useful as it is the definition of optimality in a minimization program, and you can remove it. - Something that is not clear to me is whether some matrices are symmetric or not. First, the set $S^n_+$ usually represents symmetric positive semidefinite matrices, but I see no symmetry in Line 152. The projection onto $S^n_+$ of Equation (3.1) is true if $Z$ is symmetric (or $X$ in your context), but I see nowhere that $X$ is assumed to be symmetric (or proved to be symmetric through the iterations). Line 192, you mention a Schur complement trick to make the Jacobian symmetric, but when the matrix is asymmetric, there is no reason that the Schur complement is symmetric. All in all, the derivation of the method seems unclear and ill-posed. Could you please clarify this? - Could you please define a quadratic rate of convergence using an equation? *Some experimental improvements to suggest* You reproduce the experiments from [59], which is good to establish a comparison. However I think it could be made much clearer with some modifications. - In Figure 2, I would be interested in seeing the pointwise difference between $c - \hat{u} - \hat{v}$ and $c - u_*-v_*$. It would emphasize where the approximation is best performed using this estimator. 
Reproducing the same experiment using the interior point method would be insightful. - I don’t understand how time is estimated in Figure 3. Do you report the time to do a given number of iterations? Is it the time to reach a given level of accuracy? Without this I cannot make sure the comparison is fair. - I think that reproducing Figure 6 from [59] would be insightful. My main question is that you focus on time and approximation error, but I would like to see the statistical estimation error as the number of samples grows. Reproducing this figure (and comparing with the interior point method) would illustrate that your computational approach maintains the favorable statistical properties of this OT estimator. [FZFGC] Fatras, K., Zine, Y., Flamary, R., Gribonval, R., & Courty, N. (2019). Learning with minibatch Wasserstein: asymptotic and gradient properties. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: See my questions above. At the moment I advocate for rejection because I think the paper needs a significant amount of clarification w.r.t. its contributions, background and related work, such that I would not recommend publication in such a state. However, I may have misunderstood parts of the paper, and I hope the authors will clarify this by answering my questions. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: The authors addressed the societal impact of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your time and your input. We hope that our answers below will convince you about the merits of our work. We answer your questions one-by-one below, and have included these discussions in a revised version of our paper. 1. **The rate $O(n^{-1/2d})$ should be a rate $O(n^{-2/d})$.** Fixed. 2. **Another approach that attempts to regularize OT [...] is to consider mini-batches of input data, e.g., [FZFGC].** We have included the reference in our introduction. 3. **[44] is irrelevant.** Because both Vacher [38] and our method yield dual potential **function** estimators, they can also produce OT map estimators using the Brenier formula (e.g. Eq. 44 in [38]), as used in Fig. 2. Hence, we believe [44] is a natural reference, but we will clarify. 4. **Any references or precise rates that defend lines 45-47? Any references that the semi-smooth Newton method has better convergence/scaling guarantees?** The reference that best defends lines 45-47 is [38], by the same authors as [59]. They claimed on pages 11-12: *This method has two drawbacks: first, its cost is prohibitive when the number of samples becomes large, which is necessary to ensure better statistical approximation, and second, $\ldots$*. The drawback of short-step IPM was mentioned in *Interior-point methods* by Potra and Wright. The semi-smooth Newton method has better scaling guarantees for solving many problems [30,33,45,61,64,65,67,68], where [33] showed its power for solving large plug-in OT problems. 5. **I do not understand *While there is an ongoing debate in the OT literature on the merits of computing the plug-in OT estimators v.s. kernel-based OT estimators*. Which debate is it? On which aspect does it especially focus?** We will clarify this sentence. Plug-in estimators (e.g. LP-based, Sinkhorn, or mini-batch) focus on the W distance. Kernel-based OT estimators estimate sufficiently smooth dual potential **functions**. 
The "ongoing" debate refers to whether, to estimate the W distance, it might be better to "only" compute the objective of a (regularized) discrete problem, or to use samples to estimate dual (continuous) **functions** and evaluate them on data. Plug-in OT estimators suffer from the curse of dimensionality, but are tractable for large $n$. In contrast, kernel-based OT estimators yield dimension-free estimates, but solve a very costly conic optimization problem, which has only been approached using short-step IPM [38]. This motivates our more efficient method. 6. **I do not understand *kernel-based OT estimators are better [...]*; does that mean the fewer samples we have, the better the approximation?** We meant that kernel-based estimators are very efficient statistically speaking (dimension-free rate), but are intractable for large sample sizes. Therefore, kernel-based OT estimators will be relevant when the sample size $n$ is small (the estimator is still tractable) and the dimension $d$ is large (statistical rates are $O(n^{-2/d})$ and $O(n^{-1/2})$ for plug-in and kernel-based estimators, respectively). 7. **The authors should introduce background knowledge on SSN methods and put the review of previous contributions on this method in the main body, especially [33] which is exactly the same method as you, but for unregularized OT.** Due to space constraints, we only gave a brief introduction to SSN methods for the broad NeurIPS audience, so as to focus the main text on a clear presentation of our algorithm and results. Following your suggestion, we will include a general introduction to SSN in the appendix and highlight the differences between [33] and our work. If more space is allowed, we will move some of it back to the main text. 8. **Definition 2.1 is not extremely useful as it is the definition of optimality in a minimization program, and you can remove it.** Fixed. 9. **Something that is not clear for me is whether some matrices are symmetric or not.** We apologize for the lack of clarity. 
Indeed, $X$ is *symmetric* and positive semidefinite since it is defined as the dual variable for the constraint $\sum_{i=1}^n \gamma_i\Phi_i\Phi_i^\top + \lambda_1 I \succeq 0$; see Line 150. In the revised version, we write $S_+^n = \\{X \in \mathbb{R}^{n \times n}: X^\top = X, X \succeq 0\\}$ and $X \in S_+^n$ instead of $X \succeq 0$ throughout our paper. 10. **Could you please define a quadratic rate of convergence using an equation?** We recall the residue norm $\|R(w)\|$ (see Eq. (2.6)) and define a quadratic rate as $\|R(w_{k+1})\| \leq C\|R(w_k)\|^2$ for some constant $C > 0$. 11. **Figure 2: the pointwise difference between $c-\hat{u}-\hat{v}$ and $c-u_\star-v_\star$. It would emphasize where the approximation is best performed using this estimator.** This is a great idea; we will present it in the paper. 12. **Figure 3: the experimental setup for reporting time.** We used the residue norm $\|R(w)\|$ as the measurement and terminated IPM and our method when $\|R(w)\|$ falls below the same threshold ($10^{-4}$). 13. **Figure 6 from [59]: statistical error estimation rather than time and approximation error.** We agree that the statistical properties are worth investigating and have reproduced Fig. 6 from [59] using our method and IPM. The two figures are almost indistinguishable, since both methods solve the same problem and output sufficiently accurate solutions. We would also like to argue that the discovery of efficient computational methods often precedes other advances (applied or statistical). These are two distinct and complementary subjects. In our humble opinion, the contribution of our paper is computational, and studying computational aspects of kernel-based OT estimators with theoretical guarantees is necessary. We refer you to [38, 59] for details on the statistical properties. We thank you again for your detailed reading and your constructive input! 
We hope and trust that our replies have alleviated your concerns, and we look forward to an open-minded discussion if any such concerns remain. --- Rebuttal Comment 1.1: Title: Answer to Rebuttal Comment: Dear Authors, I thank you for your rebuttal which clarified many points where I was thinking the formulation was too vague. In the revision you must clarify that you talk both about *statistical* and *computational* complexities, and that for now, the kernel or plug-in approach only enjoys a reasonable complexity on one of these aspects. This would be much clearer for the sentences where I thought it was unclear. I would also insist on being self-contained on SSN methods, then instantiating the kernel-OT case inside a theorem. This would help understanding the principle of the method while not focusing on the cumbersome notations which appear during the derivation of the OT setting. I suppose you have your reasons for proceeding in such a manner, but it personally complicated my understanding, given the ambiguity of variables which were either symmetric or not. Another question which the authors can answer during this discussion is: Why can't we scale to more than 500 samples? Compared to entropic OT which scales 'reasonably' for 10^4 samples on GPUs, this seems very limiting, and you do not seem to solve this issue. Could you contextualize in the paper whether it could be solved, or whether it is due to solving a different approximation of the OT problem? Side remark which I noticed from checking your paper: entropic OT does not suffer from the curse of dimensionality, but like kernel-OT, the constant might depend exponentially on the dimension, see e.g. [1]. In [44] and lines 41-42, it is just that the estimator is ill-posed, and is unable to exploit the regularity to break the curse of dimensionality when estimating a Monge map. For this reason I decide to increase my score. 
However, I would not personally advocate for a complete acceptance, because the requested modifications might need a new reviewing process due to their importance. [1] Genevay, A., Chizat, L., Bach, F., Cuturi, M., & Peyré, G. (2019, April). Sample complexity of Sinkhorn divergences. In The 22nd International Conference on Artificial Intelligence and Statistics (pp. 1574-1583). PMLR. --- Reply to Comment 1.1.1: Title: Many thanks for answering our rebuttal before the deadline Comment: We are very grateful for your timely response. Here are a few more answers: > **In the revision you must clarify that you talk both about statistical and computational complexities [...] This would be much clearer for the sentences where I thought it was unclear.** Yes, we heard you on this, and we will emphasize this trade-off more strongly, starting with the abstract, which we can change from *Recent works suggested that kernel-based OT [...] practice. In this paper, we propose [...]* to *Recent works suggested that kernel-based OT estimators are more statistically efficient than plug-in OT estimators when comparing probability measures in high dimensions [59]. However, this comes at a very steep computational price. These estimators are very costly, since their computation relies on the short-step interior-point method for which the required number of iterations is known to be large in practice. To improve on the scalability of these approaches, we propose in this paper a nonsmooth equation model for kernel-based OT estimation and show that it can be efficiently solved via a specialized semismooth Newton (SSN) method.* and more generally amend the introduction and background sections accordingly. > **I would also insist on being self-contained on SSN methods, then instantiate the kernel-OT case inside a theorem [...]** We agree with you, and we will provide in Section 2.3 a self-contained introduction to SSN methods that will be about half a page. > **[...] 
Why can't we scale to more than 500 samples? Compared to entropic OT which scales 'reasonably' for 10^4 samples on GPUs, this seems very limiting, and you do not seem to solve this issue. Could you contextualize in the paper whether it could be solved, or whether it is due to solving a different approximation of the OT problem?** This is indeed due to the fact that kernel-OT targets a completely different approximation of the OT problem: - kernel-based OT solvers target a **functional** optimization problem, i.e. their solutions are directly dual potential functions that agree with prior smoothness assumptions (line 130). - by contrast, the Sinkhorn algorithm is a discrete solver that computes a transport **matrix**, or dual potential **vectors**. While its outputs have recently been used to recover dual potential functions (as in [Pooladian+21]), this is mostly an interpolation, a smooth c-transform of pointwise potential values, inspired by semi-discrete OT, and not the result of a functional optimization (as done with RKHS in kernel-based OT). The approach by Vacher et al. was illustrated on a maximal number of 200 points. We propose experiments with 500 points. Our solver can scale reasonably (about 20 seconds) to 1000 points, but the IPM solver of Vacher does not; we will add this to the curves. While we agree this is still a bit small, it does start to open up some possibilities, using e.g. mini-batch OT. We will discuss this explicitly in the paper, and expand on Remark 2.2. > **Side remark which I noticed from checking your paper: entropic OT does not suffer from the curse of dimensionality, but like kernel-OT, the constant might depend exponentially on the dimension, see e.g. [1]. [...]** A discussion on entropic OT vs. kernel-OT is provided in the **State of the art** section of Vacher et al. [59]. 
We can add our summary of this: While the entropic OT rate you mention does indeed give a $1/\sqrt{n}$ statistical dependency, this is only valid for a **fixed** regularization level $\varepsilon>0$ (i.e. the statistical complexity assumes **regularized OT** between densities is the target ground truth). However, because that constant is dimensionality dependent, it will blow up exponentially fast to infinity as $\varepsilon\rightarrow 0$, if one wants to approximate the **non-regularized OT**. In that sense, entropic OT does not provide a dimension-free (even w.r.t. constants) way to compute non-regularized OT, and, more qualitatively, entropic OT can only make sense statistically for high regularizations (the constant degrades exponentially fast). As $\varepsilon \rightarrow \infty$, one recovers the MMD complexity. By contrast, Vacher et al. show that kernel OT does not suffer from such a blow-up. While their constants do depend exponentially on $d$, they are **fixed**, and the rate of $1/\sqrt{n}$ to target non-regularized OT is valid. > **For this reason I decide to increase my score. However, I would not personally advocate for a complete acceptance, because the requested modifications might need a new reviewing process due to their importance.** We are very grateful for your score increase. We believe that the modifications you have requested only target the background section, only with clarifications (the computational/statistical tradeoff of Vacher's method + a brief background on SSN). We do not need to add new material or original contributions to satisfy your requests. We humbly request your trust in carrying out these modifications.
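As context for the entropic-OT comparison in the thread above, a minimal sketch of the standard Sinkhorn iteration (a generic textbook implementation with illustrative data and an assumed regularization level, unrelated to the paper's kernel-based solver) makes the $O(n^2)$ per-iteration cost concrete: each update is a single $n \times n$ matrix-vector product.

```python
import numpy as np

def sinkhorn(C, a, b, eps=0.5, n_iter=500):
    """Standard Sinkhorn iterations for entropic OT; each step costs O(n^2)."""
    K = np.exp(-C / eps)             # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)            # one O(n^2) matvec
        u = a / (K @ v)              # one O(n^2) matvec
    P = u[:, None] * K * v[None, :]  # entropic transport plan
    return P

rng = np.random.default_rng(0)
n = 100
x = rng.random((n, 1)); y = rng.random((n, 1))
C = (x - y.T) ** 2                   # squared-distance cost between 1-D samples
a = np.full(n, 1.0 / n); b = np.full(n, 1.0 / n)
P = sinkhorn(C, a, b)
```

This discrete solver outputs a coupling matrix between the samples, which is exactly the contrast drawn in the reply: kernel-based OT instead optimizes over smooth dual potential functions.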
Summary: This paper focuses on investigating kernel-based optimal transport estimation. The approach involves reformulating the problem as a nonsmooth equation model and utilizing the semismooth Newton method to solve it. The study demonstrates that the associated residual mapping exhibits **strong semismoothness**, ensuring the applicability of the semismooth Newton method. Additionally, it is verified that the subproblem within the semismooth Newton method is well-defined, as it is equivalent to solving an invertible symmetric linear system. Finally, the proposed algorithm is supported by both theoretical guarantees, including global and local rates, and numerical experiments that highlight its superiority. Strengths: 1. The algorithm is highly practical and can be easily implemented. The paper provides clear instructions on solving the subproblem and updating the parameters, making it accessible for real-world applications. 2. The theoretical investigation is rigorous and well-founded. The authors define a suitable residual function and present both global and local convergence rates of the proposed semismooth Newton algorithm. 3. The numerical experiments provide compelling evidence of the algorithm's efficiency compared to existing methods. The results showcase the superior performance and computational advantages of the proposed approach, reinforcing its practical relevance and effectiveness. Weaknesses: 1. The global convergence rate of the proposed algorithm is dependent on an auxiliary sequence of iterates, which adds extra computational complexity to the algorithm. It would be helpful to provide further clarification in line 238 regarding whether the condition $$w_{k+1}=v_{k+1}$$ always holds. If so, the proposed algorithm will reduce to the extragradient method. 2. To show the power of semismooth Newton steps, the proposed algorithm should be compared with the pure extragradient method in numerical experiments. 
Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. How to choose the hyperparameters $\alpha_1,\alpha_2$, and $\kappa$ etc. in Algorithm 2? 2. Is there any intuition to use the adaptive strategy (3.4)? 3. In the proof of Theorem 3.4, the auxiliary sequence $\{v_k\}$ is not considered. It seems that the strategy in line 238 cannot be neglected and the case $w_{k+1}=v_{k+1}$ needs to be precluded under the conditions of Theorem 3.4. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: See weakness and questions for further details. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your encouraging comments and positive evaluation! We reply to your main questions point-by-point below and have included these discussions in the revised version of our paper. 1. **The global convergence rate of the proposed algorithm is dependent on an auxiliary sequence of iterates, which adds extra computational complexity to the algorithm. It would be helpful to provide further clarification in line 238 regarding whether the condition $w_{k+1} = v_{k+1}$ always holds. If so, the proposed algorithm will reduce to the extragradient method.** We agree that computing the auxiliary sequence results in extra cost, but argue that such cost is less than that of performing one regularized SSN step. In our experiment, we also find that the main iterates are mostly generated by regularized SSN steps and the whole algorithm converges at a superlinear rate (see Page 8, Lines 239-240). Thus, we can compute the auxiliary sequence at the initial stage and then only perform regularized SSN steps. We claim that $w_{k+1} = v_{k+1}$ will not always hold. In Page 8, Lines 240-243, we stated that, if the initial point is sufficiently close to a nondegenerate optimal solution, the regularized SSN method can achieve the quadratic convergence rate shared by other SSN methods in the existing literature [35, 18, 1] (see also Theorem 3.4). In other words, if the current iterate $w_k$ is sufficiently close to a nondegenerate optimal solution, the regularized SSN step achieves a quadratic rate (like a second-order method, e.g., Newton's method) while the EG step only achieves a linear rate (the EG method is a first-order method). This implies that the regularized SSN step can reduce the residue norm more than the EG step, so $w_{k+1} = v_{k+1}$ will not hold. 
Since Theorem 3.4 guarantees the existence of a local region where the regularized SSN step outperforms the EG step, it suffices to stop computing the auxiliary sequence after the iterates enter the local region and then perform only regularized SSN steps. This supports the use of the early stopping strategy mentioned above. However, we remark that the implementation is nontrivial since it is difficult to check whether or not the iterates have entered the local region in practice. If we stop computing the auxiliary sequence too early, our method is likely to diverge. 2. **To show the power of semismooth Newton steps, the proposed algorithm should be compared with the pure extragradient method in numerical experiments.** We agree that it would be better to compare our method with the pure EG method, but would like to mention that the power of regularized SSN steps has been partially shown in our experiment. In Page 8, Lines 239-240, we stated that, in our experiment, we find that the main iterates are mostly generated by regularized SSN steps and the whole algorithm converges at a superlinear rate. Following your suggestion, we conducted the experiment, and the preliminary results show that the pure EG method outperforms our method at the initial stage due to its relatively cheaper per-iteration cost, but only outputs a low-accuracy solution compared to our method (see the attached pdf file). 3. **How to choose the hyperparameters in Algorithm 2?** We apologize for the confusion created by not being specific. Indeed, we choose $\alpha_1=10^{-6}$, $\alpha_2 = 1.0$, $\beta_0 = 0.5$, $\beta_1 = 1.2$ and $\beta_2 = 5$ in our experiment. 4. **Is there any intuition to use the adaptive strategy (3.4)?** The parameter $\theta_k$ is an important parameter to control the quality of the SSN direction $\Delta w_k$. When $\theta_k$ is large, $\Delta w_k$ usually leads to slow yet stable convergence. 
When $\theta_k$ is small, $\Delta w_k$ can be a bad SSN direction, but the rate of convergence will be fast if $\Delta w_k$ is good. If $\frac{\rho_k}{\|\Delta w_k\|^2}$ is small, $\Delta w_k$ is usually a bad SSN direction and we increase $\theta_k$. Otherwise, we decrease it. 5. **In the proof of Theorem 3.4, the auxiliary sequence $v_k$ is not considered. It seems that the strategy in line 238 cannot be neglected and $w_{k+1} = v_{k+1}$ needs to be precluded under the conditions of Theorem 3.4.** Thank you for your insightful comments! Let us clarify why the current theoretical analysis does not need to preclude the case $w_{k+1} = v_{k+1}$ under the conditions of Theorem 3.4. The key ingredient is that we have assumed that the iterate $w_0$ is *sufficiently* close to a nondegenerate optimal solution. Then one regularized SSN step is guaranteed to achieve a quadratic rate. Since one EG step achieves a linear rate (it is a first-order method), we know that one regularized SSN step reduces the residue norm more than the EG step under the conditions of Theorem 3.4 (i.e., sufficiently close). This implies that $w_{k+1} = v_{k+1}$ can be precluded given that the iterate $w_0$ is *sufficiently* close to a nondegenerate optimal solution. We thank you again for your detailed reading and your constructive input!
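The rate comparison invoked in this rebuttal — a quadratic-rate step $\|R(w_{k+1})\| \leq C\|R(w_k)\|^2$ eventually dominating a linear-rate step — can be illustrated on a toy scalar residual. This is generic Newton-vs-first-order behavior under assumed toy update rules, not the paper's actual SSN or EG iterations:

```python
import numpy as np

# Toy residual R(w) = w^2 - 2, with root w* = sqrt(2).
R = lambda w: w**2 - 2
J = lambda w: 2 * w

# "Quadratic-rate" step: Newton. "Linear-rate" stand-in: small damped fixed-point step.
w_newton, w_linear = 1.5, 1.5
res_newton, res_linear = [], []
for _ in range(5):
    w_newton -= R(w_newton) / J(w_newton)   # Newton: |R| roughly squares each step
    w_linear -= 0.2 * R(w_linear)           # damped step: |R| shrinks by a constant factor
    res_newton.append(abs(R(w_newton)))
    res_linear.append(abs(R(w_linear)))
```

Near the root, the Newton residuals obey $|R_{k+1}| \lesssim |R_k|^2$ and hit machine precision within a handful of steps, while the linear-rate iterate is still far away, mirroring why, sufficiently close to a nondegenerate solution, the SSN candidate beats the EG candidate and $w_{k+1} = v_{k+1}$ is precluded.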
null
null
Rebuttal 1: Rebuttal: **We would like to thank the PCs, SACs, ACs and the reviewers for their efforts in evaluating our paper.** We appreciate that the reviewers pointed out the importance of the problem and of our algorithm given the increasing popularity of computational OT. All the comments will be addressed in the revised version of our paper, and we will release the source code if the paper is accepted. Besides the specific responses that we have provided to each reviewer, we follow the suggestion of Reviewer bQMP by comparing our method with the pure extragradient (EG) method. The preliminary results show that our method consistently outperforms the pure EG method and can output a highly accurate solution in terms of the residue norm. The experimental setup here is the same as that used in the main paper: we fix the dimension $d=5$ and vary the sample size $n \in \{50, 100, 200, 300, 400, 500\}$. For the pure EG method, we tune the stepsize and set it to $0.01$. Pdf: /pdf/fb0694f6c4e92ae97b1a40ba051bb33ff46c1397.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
On the Convergence of CART under Sufficient Impurity Decrease Condition
Accept (poster)
Summary: This paper improves the rate of consistency of CART based on a sufficient impurity decrease (SID) condition in regression settings. The authors then provide examples, mostly special additive models, that satisfy the SID condition, and show that the rate of consistency cannot be improved by more than a log factor. Strengths: The strengths of this paper are summarized as follows: - The "locally reverse Poincare inequality" shown in this paper is intuitive and can be more general than previous studies. - The results are intuitive and improve upon the results in Chi et al. 2022 when the noise is sub-Gaussian. Weaknesses: The weaknesses of this paper are summarized as follows: **About contributions** 1. The examples in this paper that satisfy the SID condition are still restricted to special additive models, which may not be very practical. And it seems that extending the results to non-additive models is not easy. 2. The discussions about optimality in [Line 279-285] are not entirely rigorous. Note that the lower bound of order $\Omega(n^{-2 / (p + 2)})$ shown in Tan et al. 2022 is based on general additive models which may not satisfy the SID condition. In other words, when the SID condition is satisfied, the lower bound may be better. Typos: - $\kappa$ in [Line 242] seems to be not defined and it seems to be $\tau$. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: I would like to ask the authors the following questions: - Can the SID condition be satisfied in multi-dimensional settings without assuming additive models? See Weaknesses #1. - May the curse of dimensionality be avoided when the SID condition holds? See Weaknesses #2. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: The authors adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your overall positive assessment of our work. Reply to weakness 1/ question 1: Thank you for this comment. Actually, it is possible to relax the additive model on $f^*$ so that it is "approximately additive". More precisely, we can assume that there is an additive function $g^*$ that approximates $f^*$, such that in each rectangle $A$, the L2 distance between $f^*$ and $g^*$ is bounded by a small constant times the variance of $f^*$ on $A$. On the other hand, it seems hard to relax this condition further. Consider a two-dimensional model $f^*(x_1,x_2)$ with features $X_1$ and $X_2$ independent. Suppose $E(f^*(X_1,X_2)) = 0$. We can decompose $f^*$ into additive components and a remaining term: \begin{equation} f^*(x_1,x_2) = f_1^*(x_1) + f_2^*(x_2) + h^*(x_1,x_2) \end{equation} where $f^*_1(x_1) := E(f^*(X_1,X_2) | X_1=x_1)$, $f^*_2(x_2) := E(f^*(X_1,X_2) | X_2=x_2)$ and $h^*(x_1,x_2) := f^*(x_1,x_2) - f_1^*(x_1) - f_2^*(x_2)$. Then it can be checked that \begin{equation} E (h^*(X_1,X_2) | X_1) = E (h^*(X_1,X_2) | X_2) = 0. \end{equation} If the additive components $f^*_1 + f^*_2$ are small, say, in the extreme case where $f^*_1 + f^*_2$ is zero, then $f^* = h^*$, and by the equality above we know that the SID condition cannot be satisfied. Therefore, for the SID condition to be satisfied, it is necessary that the function $f^*$ has a significant amount of additive signal. But of course, this is only required for the SID condition to be satisfied. It is possible that CART can still work well without the SID condition, which needs more exploration and is beyond the scope of this work. Reply to weakness 2/ question 2: Thank you for this comment. We will reword the discussion on optimality in lines 279-285. Indeed, if we consider the class of functions satisfying the SID condition with a fixed parameter $\lambda$, then the rate of Theorem 2.3 has an exponent with no explicit dependence on the dimension $p$.
However, in multi-dimensional models, even for additive models satisfying the SID condition, the SID parameter $\lambda$ seems to depend on $p$. Therefore it is not proper to claim that it avoids the curse of dimensionality. --- Rebuttal Comment 1.1: Comment: Thank you for providing such a comprehensive clarification. However, the dependency on the additive structure remains unrealistic, so I will keep my score. --- Reply to Comment 1.1.1: Comment: Thank you for your reply. We admit that the additive structure may look a little unrealistic. But still, we'd like to note that the additive structure assumption has also appeared in much of the recent literature analyzing CART: 1. Erwan Scornet, Gérard Biau, and Jean-Philippe Vert. Consistency of random forests. The Annals of Statistics, 43(4):1716–1741, 2015. 2. Klusowski, J. Sparse learning with CART. Advances in Neural Information Processing Systems, 2020, 33: 11612–11622. 3. Klusowski, J., and Tian, P. Large-scale prediction with decision trees. Journal of the American Statistical Association, 2022. No existing work has shown the consistency of CART without making strong structural assumptions on the underlying model. On the other hand, the requirement for the additive model assumption sheds light on the limitation of only using axis-aligned splits. Indeed, if skewed splits are allowed, the assumption on the underlying model can be significantly relaxed: 4. Cattaneo, M. D., Chandak, R., and Klusowski, J. M. Convergence rates of oblique regression trees for flexible function libraries. arXiv preprint arXiv:2210.14429, 2022. Given that our major focus is on an analysis of CART under the SID condition, we take the additive model as an example (as it appears in the literature) and use it to illustrate the applicability of the SID condition. Of course, it can be an interesting (and challenging) question to relax these conditions in the future.
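The decomposition argument in the rebuttal can be checked numerically. The script below is an illustrative example (not from the paper): for the pure interaction $f^*(x_1,x_2) = x_1 x_2$ with $X_1, X_2$ independent and Uniform$(-1,1)$, both additive components $f_1^*, f_2^*$ are identically zero, so $f^* = h^*$ and the additive signal required by the SID condition vanishes.

```python
import numpy as np

# Illustrative check (not from the paper): take f*(x1, x2) = x1 * x2 with
# X1, X2 independent Uniform(-1, 1).  Then f1*(x1) = E[f* | X1=x1] = x1*E[X2] = 0
# and likewise f2* = 0, so f* = h* and the additive part of the signal is zero.
rng = np.random.default_rng(0)
n = 200_000
x1 = rng.uniform(-1, 1, n)
x2 = rng.uniform(-1, 1, n)
f = x1 * x2

# Monte Carlo estimates of the additive components on a coarse grid of bins.
bins = np.linspace(-1, 1, 11)
f1_hat = np.array([f[(x1 >= a) & (x1 < b)].mean() for a, b in zip(bins[:-1], bins[1:])])
f2_hat = np.array([f[(x2 >= a) & (x2 < b)].mean() for a, b in zip(bins[:-1], bins[1:])])

# Both estimated components are ~0 up to Monte Carlo noise.
print(np.max(np.abs(f1_hat)), np.max(np.abs(f2_hat)))
```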
Summary: The paper studies the convergence rate of CART, a greedy algorithm for building decision trees, under a regression setting. It introduces a sufficient impurity decrease (SID) condition on the underlying function that ensures the consistency and polynomial convergence of CART. It also provides examples and sufficient conditions to verify the SID condition for various function classes, such as additive models, polynomials, and smooth and strongly convex functions. Strengths: 1. The paper provides a refined analysis of CART under the SID condition, which improves the prediction error bound over the previous work by Chi et al. (2022) and shows its optimality up to logarithmic factors. 2. The paper decodes the mystery of the SID condition by introducing a locally reverse Poincare (LRP) class of univariate functions and showing that additive functions with LRP components satisfy the SID condition. This connects the two types of assumptions in the literature: additive model and SID condition. 3. The paper demonstrates the practical utility of its results by providing examples and sufficient conditions to check the LRP property for various well-known function classes, such as polynomials, smooth and strongly convex functions, and strongly increasing functions. 4. The paper is well-written and clear, with detailed proofs and explanations in the appendix. It also includes simulations and figures to illustrate its main findings and compare them with existing results. Weaknesses: 1. The paper relies on some technical assumptions and conditions, such as bounded errors, bounded signal function, and LRP property, which may not hold or be easy to verify for some function classes. It would be helpful to provide more intuition and motivation for these assumptions and conditions, and discuss their implications and limitations for the applicability of the results. 2. The improvement of the convergence rate in this paper still heavily relies on the SID condition. 
Although the paper decomposes and illustrates the SID condition with some examples, these are still abstract mathematical conditions in one-dimensional settings. If the paper could further analyze the SID condition in relation to the structure of real data, then its contribution to the SID condition would be more convincing. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Do Examples 3.1 - 3.3 still hold if they are generalized to high-dimensional situations? 2. The additive model assumption in Section 3 is still quite strong; is it possible to use milder assumptions? 3. The intuition behind the LRP condition needs further explanation; does this mathematical property have any implications for improving the CART algorithm? Maybe it can help explain why CART tends to handle data with certain structural characteristics. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: NAN. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your overall positive assessment of our work. Reply to weakness 1: Thank you for this suggestion, we will add a discussion in the revised paper. Reply to weakness 2 and question 2: Thank you for this comment. Actually, it is possible to relax the additive model on $f^*$ so that it is "approximately additive". More precisely, we can assume that there is an additive function $g^*$ that approximates $f^*$, such that in each rectangle $A$, the L2 distance between $f^*$ and $g^*$ is bounded by a small constant times the variance of $f^*$ on $A$. On the other hand, it seems hard to relax this condition further. Consider a two-dimensional model $f^*(x_1,x_2)$ with features $X_1$ and $X_2$ independent. Suppose $E(f^*(X_1,X_2)) = 0$. We can decompose $f^*$ into additive components and a remaining term: \begin{equation} f^*(x_1,x_2) = f_1^*(x_1) + f_2^*(x_2) + h^*(x_1,x_2) \end{equation} where $f^*_1(x_1) := E(f^*(X_1,X_2) | X_1=x_1)$, $f^*_2(x_2) := E(f^*(X_1,X_2) | X_2=x_2)$ and $h^*(x_1,x_2) := f^*(x_1,x_2) - f_1^*(x_1) - f_2^*(x_2)$. Then it can be checked that \begin{equation} E (h^*(X_1,X_2) | X_1) = E (h^*(X_1,X_2) | X_2) = 0. \end{equation} If the additive components $f^*_1 + f^*_2$ are small, say, in the extreme case where $f^*_1 + f^*_2$ is zero, then $f^* = h^*$, and by the equality above we know that the SID condition cannot be satisfied. Therefore, for the SID condition to be satisfied, it is necessary that the function $f^*$ has a significant amount of additive signal. But of course, this is only required for the SID condition to be satisfied. It is possible that CART can still work well without the SID condition, which needs more exploration and is beyond the scope of this work. Reply to question 1: Thanks -- these are great questions. We believe that Examples 3.1 and 3.2 still satisfy the SID condition in multi-dimensional settings (for Example 3.1 we assume strict monotonicity in all the dimensions).
For Example 3.3 we are not sure, as multi-variate polynomials may have some symmetric structure (like the XOR gate) that makes a single axis-aligned split fail to work. Reply to question 3: Thanks for this helpful comment. The LRP condition indicates that CART can handle signals with significant local variability (characterized by the derivative). We will add some comments in the revised version. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed response. However, since the conditions on which the results of this paper rely still require more explanation, I will keep the current score. Nevertheless, I would still like this paper to be accepted by the conference.
Summary: The paper performs a theoretical analysis of the well-known CART algorithm. The authors show a convergence rate of CART under the condition called 'sufficient impurity decrease' (SID), which is tighter than known ones. The authors further provide conditions under which a class of functions satisfies SID. Strengths: - The paper seems technically sound and the analysis of CART would be important. - The paper elaborately explains differences from past work, in particular from [10]. Weaknesses: - Experimental verification is limited to one quite simple figure for the simplest case. Of course, the paper is a theoretical paper, but richer empirical evidence would have made it more convincing (such as different true functions and comparison with past bounds). Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Is there any insight that can be derived from the proposed analysis for tree ensemble models such as random forests? Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: A limitation about lambda is discussed in Section 5. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your overall positive assessment of our work. Reply to weakness 1: Thank you for this suggestion. The reason that we only ran the simulation for linear functions is that, for other signal functions, it is hard to precisely evaluate the SID parameter $\lambda$, and hence difficult to set up a fair baseline for numerical evaluation. We would like to explore this direction further in future work. Reply to question 1: Thank you for this question -- it immediately leads to considerations for future work. By a similar argument as in [10], we can immediately derive an error bound for regression random forests. But on the other hand, this error bound is in essence for a single tree and does not make use of the ensemble structure. If the effect of ensembling is properly analyzed, it is possible to derive a stronger error bound than the one given in this paper.
Summary: The paper focuses on the analysis of the prediction error of Classification and Regression Trees (CART) for regression problems under a sufficient impurity decrease (SID) condition. The SID condition is a strong assumption on the approximation power of tree splits, which can ensure the consistency of CART. The paper establishes an upper bound on the prediction error of CART under the SID condition and shows that the error bound is tight up to log factors. The paper also discusses a few sufficient conditions under which an additive model satisfies the SID condition. The paper's first contribution is a refined analysis of CART under the SID condition, which improves upon the known result by providing a tighter error bound. The paper's second contribution is the decoding of the mystery of the SID condition, which builds a connection between the two types of assumptions in the literature: the additive model and the SID condition. The paper discusses a few examples of how the locally reverse Poincare inequality can be verified. Overall, the paper provides a refined analysis of CART under the SID condition and sheds light on the connection between two types of assumptions in the literature: additive model and SID condition. Strengths: The paper has several strengths. First, it provides a refined analysis of the prediction error of Classification and Regression Trees (CART) under a sufficient impurity decrease (SID) condition for regression problems. The paper establishes an upper bound on the prediction error of CART under the SID condition and shows that the error bound is tight up to log factors. Second, the paper discusses a few sufficient conditions under which an additive model satisfies the SID condition. Third, the paper provides examples of how the locally reverse Poincare inequality can be verified. Weaknesses: I think the work is significant. I don’t have many concerns, but I have some questions about the method. 1. 
Can this method be extended to classification problems? 2. In Theorem 2.3, why do you need to suppose that $n$ satisfies such a complicated condition? 3. Also, in Theorem 2.3, why should $d$ be set to a function of the data size? 4. In line 32, should it be $[0,1]^p$? Technical Quality: 3 good Clarity: 3 good Questions for Authors: Referred to Strengths and Weaknesses Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Referred to Strengths and Weaknesses Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your overall positive assessment of our work. Reply to weakness 1: Thanks for this great question. It depends on the specific version of classification trees under discussion and the loss used to make splits and measure accuracy. If the prediction in each node is the majority vote and the loss is the 0-1 loss, then the current analysis does not apply, because the impurity decrease considered in this paper is specific to the square loss. If the prediction in each node is the empirical probability of lying in a class, and the loss is continuous with curvature (e.g. cross-entropy), then it is possible to generalize some of the analysis to classification. Reply to weakness 2: Thanks for this question. The condition on $n$ is needed for a uniform bound between the empirical and population means of some quantities. This condition is mild in this context: if $\bar{\theta}$ and $\delta$ are constants, then this condition only requires that $n$ is large enough with $d\log(np)/n \le O(1)$. Reply to weakness 3: Thanks for this question. Note that the error bound in (10) holds true for an arbitrary value of $d$, while the error bound in (11) holds only when $d \sim \log_2(n)$. We take $d \sim \log_2(n)$ simply to minimize the RHS of (10) and arrive at the error bound in (11). This value increases as $n$ increases, because when more data is available, a larger model can be used to decrease the bias without significantly increasing the variance. Reply to weakness 4: Thank you. We will correct it in the revision.
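Since the impurity decrease discussed in the rebuttal is specific to the square loss, a minimal illustrative sketch (not the paper's implementation) of the CART split criterion for a single feature may help: the chosen threshold maximizes the reduction in the sum of squared deviations.

```python
import numpy as np

def best_split(x, y):
    """Illustrative sketch of the CART split criterion under square loss:
    find the axis-aligned threshold on feature x that maximizes the impurity
    decrease SSE(A) - SSE(A_L) - SSE(A_R), where SSE(S) is the sum of
    squared deviations of y from its mean over S."""
    order = np.argsort(x)
    x, y = x[order], y[order]

    def sse(v):
        return float(np.sum((v - v.mean()) ** 2)) if len(v) else 0.0

    total = sse(y)
    best_gain, best_thresh = -np.inf, None
    for i in range(1, len(x)):
        if x[i] == x[i - 1]:
            continue  # no valid threshold between equal feature values
        gain = total - sse(y[:i]) - sse(y[i:])
        if gain > best_gain:
            best_gain, best_thresh = gain, 0.5 * (x[i - 1] + x[i])
    return best_thresh, best_gain

# For a step function, the best split lands exactly at the jump.
x = np.linspace(0, 1, 100)
y = (x > 0.5).astype(float)
thresh, gain = best_split(x, y)
print(thresh, gain)
```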
null
NeurIPS_2023_submissions_huggingface
2023
Summary: In this paper, the approximation properties of the model obtained from the decision tree learning algorithm CART are analyzed. More precisely, in Theorem 2.3, a convergence rate of the decision tree approximation to the true model is obtained. This analysis is performed under two assumptions (already introduced in the literature), with the Sufficient Impurity Decrease (SID) assumption in particular. Finally, examples of true models that satisfy the SID assumption are established. Strengths: 1. The paper is well-written and easy to follow (although I did not check the proofs). 2. The analysis of Theorem 2.3 improves the result (i.e., the convergence rate) of [10]. 3. Proposition 3.1 links the SID condition and the class of functions satisfying the Locally Reverse Poincaré condition (which is proven for some classes). Weaknesses: 1. The additive structure assumption on $f^*$ seems restrictive (but is also assumed in previous works). Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Typos and minor comments: - l38: the the -> the - l104: Chi et al.[10] -> Chi et al. [10] - l144: the notations $\bar{y}\_{\mathcal{I}\_A}$, $\bar{y}\_{\mathcal{I}\_{A\_L}}$ and $\bar{y}\_{\mathcal{I}\_{A\_R}}$ are not introduced - l163: "as" -> "at" - l193: "Assumptions‘2.1" -> "Assumptions‘ 2.1" - l240/246/247: Poincare -> Poincaré - l242: $\kappa$ -> "$\tau$" - l704: there is an indefinite reference "??" - The references are not "standardized", e.g. for JMLR it is written "The Journal of Machine Learning Research" or "Journal of Machine Learning Research" Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The limitation about the SID condition is discussed in Section 5.
Moreover, I do not see any potential negative societal impact of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your overall positive assessment of our work. Reply to weakness 1: Thank you for this comment. Actually, it is possible to relax the additive model on $f^*$ so that it is "approximately additive". More precisely, we can assume that there is an additive function $g^*$ that approximates $f^*$, such that in each rectangle $A$, the L2 distance between $f^*$ and $g^*$ is bounded by a small constant times the variance of $f^*$ on $A$. On the other hand, it seems hard to relax this condition further. Consider a two-dimensional model $f^*(x_1,x_2)$ with features $X_1$ and $X_2$ independent. Suppose $E(f^*(X_1,X_2)) = 0$. We can decompose $f^*$ into additive components and a remaining term: \begin{equation} f^*(x_1,x_2) = f_1^*(x_1) + f_2^*(x_2) + h^*(x_1,x_2) \end{equation} where $f^*_1(x_1) := E(f^*(X_1,X_2) | X_1=x_1)$, $f^*_2(x_2) := E(f^*(X_1,X_2) | X_2=x_2)$ and $h^*(x_1,x_2) := f^*(x_1,x_2) - f_1^*(x_1) - f_2^*(x_2)$. Then it can be checked that \begin{equation} E (h^*(X_1,X_2) | X_1) = E (h^*(X_1,X_2) | X_2) = 0. \end{equation} If the additive components $f^*_1 + f^*_2$ are small, say, in the extreme case where $f^*_1 + f^*_2$ is zero, then $f^* = h^*$, and by the equality above we know that the SID condition cannot be satisfied. Therefore, for the SID condition to be satisfied, it is necessary that the function $f^*$ has a significant amount of additive signal. But of course, this is only required for the SID condition to be satisfied. It is possible that CART can still work well without the SID condition, which needs more exploration and is beyond the scope of this work. Reply to questions: Thanks! We will correct them in the revision. --- Rebuttal Comment 1.1: Comment: Thank you for your answer and your clarifications. I will keep my score unchanged.
null
null
null
null
null
null
Students Parrot Their Teachers: Membership Inference on Model Distillation
Accept (oral)
Summary: Multiple previous works have proposed knowledge distillation techniques to distill the knowledge of a teacher trained on sensitive data into a student model which is supposedly protected against membership inference attacks. This paper proposes a new membership inference attack on the student model. Attacking the student models, it is shown that knowledge distillation provides only limited privacy against membership inference attacks. It is further shown that access to the dataset used to train the student model, but not to the student model itself, suffices to attack the private data of the teacher model. Strengths: - **Originality:** The paper proposes a novel approach for membership inference attacks on student models. - **Quality:** The experimental results seem sound and the claims of the paper are well supported. - **Clarity:** Overall, the paper is mostly well written and well organized. - **Significance:** With previous work having proposed knowledge distillation as a defense technique against membership inference, the results of the paper are important and most likely of high impact to other researchers. Weaknesses: **Clarity:** - The setting and the assumptions made for the proposed attacks are not very clear, and it would be helpful for the reader to explain the assumptions and information available to the attacker in more detail. - Unfortunately, the code used to run the experiments will not be made public, which hinders reproducibility. Misc: - Line 94: "mitigates prevent". There seems to be some missing or additional word. - Maybe you could mention in the captions that "+" in the plots means a positive correlation. Otherwise, it is a bit confusing if the reader is searching for a blue plus in the plots. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: **Q1:** The per-example membership inference attack success rate is calculated over 1000 models.
However, how do these models differ? Were they trained on the same data but with different seeds? If so, aren't these models behaving very similarly? **Q2:** In line 241 the indirect attack is described. However, the procedure of the attack is not quite clear to me. As far as I understood, the attacker has no knowledge of the teacher model's predictions on the teacher examples (line 231). Could you clarify what assumptions are made for this attack and how the attacker fits the Gaussians in this case? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: - the limitations are appropriately addressed in the supplementary material Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your time and effort in reading our submission and writing your review! >**Line 94:** Sorry about this! Actually, every point is a small blue plus sign! We will explain this clearly in the caption. >**Q1:** We follow the evaluation strategy used to evaluate LiRA, where each model is trained on a different 50% split of the dataset. That ensures these models differ in their membership inference behavior. >**Q2:** The LiRA strategy trains many shadow models on a random 50% subsample of the teacher set. As a result, each teacher example will be present in roughly half of the shadow models and absent from the other half. If we train 100 shadow models, then for each student example we can compare the predictions of the ~50 shadow models containing the teacher example with the predictions of the ~50 shadow models without it. We can do this for each student query, because we assume here the adversary sees the student query outputs (but cannot query arbitrary points, which would be necessary to observe the model outputs for the teacher examples). This attack has the additional assumption that the adversary knows the student queries, as all of our End-to-End attacks do, but otherwise has no extra assumptions beyond LiRA’s. Let us know if this explanation helps, and we can add it to the paper, or help clarify more. --- Rebuttal Comment 1.1: Title: Answer Rebuttal Comment: Thank you for your rebuttal. All my questions were appropriately addressed, and I will raise my score from 7 --> 8 to `Strong Accept`.
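The Gaussian-fitting step described in the rebuttal can be sketched as follows. This is an illustrative toy implementation (not the authors' code): given shadow-model logits observed with the target example IN versus OUT of training, fit one Gaussian to each set and score an observed logit by the log-likelihood ratio, where positive scores favor membership.

```python
import math

def lira_score(obs, in_logits, out_logits):
    """Illustrative sketch of the per-example LiRA-style test described
    above: fit a Gaussian to the shadow-model logits where the target
    example was IN the training set and one where it was OUT, then score
    the observed logit by the log-likelihood ratio (positive => IN)."""
    def fit(xs):
        mu = sum(xs) / len(xs)
        var = sum((x - mu) ** 2 for x in xs) / len(xs) + 1e-12  # avoid zero variance
        return mu, var

    def log_pdf(x, mu, var):
        return -0.5 * math.log(2 * math.pi * var) - (x - mu) ** 2 / (2 * var)

    mu_in, var_in = fit(in_logits)
    mu_out, var_out = fit(out_logits)
    return log_pdf(obs, mu_in, var_in) - log_pdf(obs, mu_out, var_out)

# Shadow models that saw the example tend to produce higher logits.
in_logits = [4.8, 5.1, 5.0, 4.9, 5.2]
out_logits = [1.9, 2.2, 2.0, 2.1, 1.8]
print(lira_score(5.0, in_logits, out_logits) > 0)  # observation looks IN
print(lira_score(2.0, in_logits, out_logits) < 0)  # observation looks OUT
```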
Summary: The paper examines the effectiveness of model distillation in protecting the privacy of training data. Through the use of membership inference attacks, the authors demonstrate that distillation alone provides limited privacy across various domains. The authors also suggest several design considerations for improving privacy in model distillation, such as deduplicating the teacher set and considering complementary techniques like differentially private training. Strengths: - Trendy topic - Well-organized paper - Extensive experiments Weaknesses: - Attack detail in Section 5 is not clear - Additional discussion might be helpful Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: This paper provides valuable insights by addressing the misconception that model distillation can effectively protect the privacy of training data. The paper is well-organized and easy to follow, and a thorough examination of potential influencing factors contributes to the strength of the research. However, I have some concerns regarding the details of the attacks and the interpretation of results: - Regarding the experimental setting for End-to-End LiRA, it would be helpful to clarify how the corresponding student dataset was chosen. This information would enhance the reproducibility and validity of the experiments. - The paper mentions that attacks on the student model should not be more powerful than those on the teacher model, as implied by the data processing inequality. However, Figure 3 reveals that some samples remain more vulnerable after distillation. It would be beneficial to provide a possible explanation for this observation, as it appears to contradict the expected outcome. - Section 5, which introduces a new attack that only utilizes the student query dataset, is difficult to understand. 
The authors claim that “for each teacher example $z^T_j$, we fit a Gaussian distribution to the logits of each student example $z^S_i$, when $z^T_j$ is either IN or OUT.” However, it is unclear how the authors establish a link between student examples and arbitrary teacher examples, and how the student examples can indicate the membership status of the teacher example. Further clarification and additional illustrations would greatly enhance understanding in this section. Overall, this paper makes significant contributions and offers valuable insights. Addressing the aforementioned concerns would further strengthen the research and improve the clarity of the presented findings. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: The authors have adequately claimed their limitations in Appendix B. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your time in reading the submission and writing the review! >**End-to-End LiRA:** We use the same student set as used to train the target student model. This is because distillation can be done on public, nonsensitive data. >**Figure 3 & Data Processing Inequality:** The data processing inequality bounds the information contained in the student model about the data’s membership in the teacher set. Thus, this only bounds the performance of an optimal attack, which LiRA on the teacher model may not be. We also note that the x and y coordinates of each point in Figure 3 are averages over a sample of target models, and so are noisy values that can randomly appear above the dotted line. >**Clarification for Section 5:** The LiRA strategy trains many shadow models on a random 50% subsample of the teacher set. As a result, each teacher example will be present in roughly half of the shadow models, and will not be present in the other half. Then, if we train 100 shadow models, we can look at the predictions on each student example of the ~50 shadow models containing the target teacher example, as well as the predictions of the ~50 shadow models without it. Let us know if this explanation helps, and we can add it to the paper, or help clarify more. --- Rebuttal Comment 1.1: Title: Thank you for the clarification Comment: Thank you for the clarification. My questions were appropriately addressed!
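For concreteness, the per-pair Gaussian bookkeeping in this clarification can be sketched with toy numbers; the `lira_score` helper and all distributions below are illustrative stand-ins, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def lira_score(obs_logit, in_logits, out_logits):
    """Toy LiRA-style score for one (teacher example, student query) pair.

    Fit one Gaussian to the shadow-model logits on this student query when
    the target teacher example was IN the shadow training set, another when
    it was OUT, and return the log-likelihood ratio for the observed logit.
    """
    mu_in, sd_in = in_logits.mean(), in_logits.std() + 1e-8
    mu_out, sd_out = out_logits.mean(), out_logits.std() + 1e-8

    def log_gauss(x, mu, sd):
        return -0.5 * ((x - mu) / sd) ** 2 - np.log(sd) - 0.5 * np.log(2 * np.pi)

    return log_gauss(obs_logit, mu_in, sd_in) - log_gauss(obs_logit, mu_out, sd_out)

# ~50 shadow models trained WITH the teacher example, ~50 trained WITHOUT it
# (synthetic logits, purely for illustration).
in_logits = rng.normal(loc=2.0, scale=0.5, size=50)
out_logits = rng.normal(loc=0.0, scale=0.5, size=50)

# A student logit near the IN cluster yields a positive (member-leaning) score.
print(lira_score(2.1, in_logits, out_logits) > 0)
```

Summing such scores over many student queries would then give an overall membership signal for the teacher example, in the spirit of the attack described above.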
Summary: The paper explores the privacy implications of model distillation, a technique used to transfer knowledge from a teacher model to a student model. The authors investigate membership inference attacks on both the teacher and student training sets to evaluate the privacy provided by distillation. The authors extend the LiRA attack to the distillation setting in two ways. The first attack calibrates the attack only on the teacher models and the second attack uses the whole distillation procedure. They find that distillation alone provides limited privacy protection. The attacks on distilled models succeed even though the distilled models have never seen the teacher's data directly. Finally, they also highlight the importance of considering factors such as data duplication, teacher set poisoning, temperature scaling and distribution shifts when evaluating the privacy risks of distillation. Strengths: - The authors address an important problem. As machine learning capabilities grow and models are deployed directly to personal devices with limited computing capability, model compression algorithms such as distillation are often used. Understanding the privacy risk of these algorithms is therefore an increasingly important topic. - The paper is very easy to read and well motivated. In particular, the visualizations in figures 3 and 4 are very intuitive and help tell the story - The paper considers many experimental evaluation settings (albeit with limited detail about the concrete experiments) Weaknesses: - An obvious mitigation of membership inference attacks is Differential Privacy; it would be interesting to see how effectively DP mitigates the attack, as in Mireshghallah et al. [2022]. - Limited detail of experimental setup. What is the utility of the models? - Some conclusions seem not deeply thought through. E.g.
In line 213, the authors note that there are examples for which vulnerability drops by only single-digit percentage points, and since privacy is a worst-case guarantee, they conclude that distillation provides limited privacy. While I agree that distillation provides limited privacy, it is possible that the least vulnerable points see the single-digit drop in vulnerability whereas the most vulnerable points see a more significant drop. **References** Mireshghallah, Fatemehsadat, et al. "Differentially private model compression." Advances in Neural Information Processing Systems 35 (2022): 29468-29483. Technical Quality: 2 fair Clarity: 4 excellent Questions for Authors: What is the utility of the models? Is privacy simply correlated with utility? Do we see the same percentage point drops in model utility when going from teacher to student? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 4 excellent Contribution: 3 good Limitations: The authors have accurately described limitations, albeit only in the appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your time in reading the submission and writing the review! >**Differential privacy:** This is an interesting question. Part of the goal of our work was to consider distillation *without privacy* because there exist past works that attempt to show distillation can achieve a strong notion of privacy. Thus, our aim was to show this is not the case. Investigating differential privacy’s impact on these attacks is interesting future work. We also remark that the Mireshghallah et al work considers a slightly different threat model where the teacher and student datasets are identical, which we show in the Supplemental material to be much more vulnerable to attack than the threat model where the teacher is sensitive and the student is public, nonsensitive data. We can also be more clear when we cite the Mireshghallah et al work in Section 2.2 that it would offer a provable guarantee against our attacks. >**Model utility:** Our CIFAR-10 models reach roughly 88% accuracy. This training setup reaches >94% accuracy on the full CIFAR-10 training set. On Purchase100 the student models reach 74-75% accuracy, and on Texas100 the models reach 54-55% accuracy (note these are 100 class tasks). Unfortunately, we didn’t save test predictions on WikiText, so we need to retrain a model to get its utility, please bear with us. We’ll put these numbers in the paper. >**Privacy as a worst case:** It’s true that the scenario you propose could offer worst-case privacy protection, but only if this were true for *all very vulnerable points*; we broadly find that this is not the case in Figure 3. We will be more careful about our wording here and ensure we highlight that not all vulnerable data see significant reductions in membership inference vulnerability. 
>**Privacy and utility?:** Though utility and vulnerability are often correlated, our attacks significantly outperform what can be inferred by simple attacks based on the accuracy of the model (i.e., the Yeom et al. 2018 attack, see Figure 12). Further, our attacks are not always well correlated with utility: there exist other factors that influence their success. For example, we find that increasing temperature (Figure 5d) does not change utility significantly, but does increase vulnerability. Distilling with CIFAR-100 examples (Figure 6) leads to less utility compared to distilling with CIFAR-10 examples (roughly 85% compared to 88%), but interestingly leads to higher vulnerability. Deduplicating increases utility slightly (88% instead of 87.5% accuracy), but reduces vulnerability (Figure 5b). We can add a discussion of this to the paper. --- Rebuttal Comment 1.1: Comment: Thank you for the clarifications. Particularly for reporting the accuracy numbers. I think including them in the final version will help the reader get a better picture of the experimental setting and help reproducibility.
Summary: The authors in this paper investigate the efficacy of membership inference attacks (MIA) in model distillation. Their novel attack(s) show that MIA is possible even when the teacher model is only queried on the most influential points in the student inputs. Finally, they also demonstrate how their attacks are the strongest when the teacher set is poisoned OR when the student set ~= teacher set. Strengths: 1. The novelty is strong in this paper. No one has previously tackled MIA in model distillation as the authors here do. 2. The paper is also very easy to read and understand. 3. I really like the novel idea present in this paper (specifically Figure 1): membership of a target example in the teacher’s training set can be derived through querying the teacher model on various student set examples (and these are entirely different examples from the target!). 4. Figure 3 is a great result as well. Its significance might not be immediately obvious, because the data points are essentially “post-processed” and there should be some sense of privacy after the distillation process. The authors here show that distillation doesn’t provide much privacy benefit. 5. Their investigation into why MIA works well for student models is sound. I really like Figure 5 as well, which shows the MIA effects through duplicates, data poisoning and temperature scaling, and the section on student-teacher data drift was an amazing read :-) Weaknesses: 1. I don’t really have any major weaknesses to point out! Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1. Could this happen because of correlated features present between the student queries and the target example? According to Figure 1, the “red” colour present in the student examples is essential for the MIA to work. My intuition is that a higher correlation in features between the student inputs and the teacher target example should lead to higher success in MIA.
Of course, this may be written down as Future Work and I’m not quite sure if such work has been done in this space. I think it might be nice to investigate such correlation effects between the student and teacher examples. 2. I noticed that all the attacks mentioned in this paper were based on LiRA. Is there a specific reason why the authors settled on LiRA and not other attacks? (It's totally okay to use LiRA, but I wonder whether there are other attacks you could have tried/did try out to measure their efficacy against the LiRA-based ones, provided you have time to run these experiments.) Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: 1. Need to run a number of shadow models for running a successful MIA (not really a limitation of this approach, but this is a limitation of MIA itself!). Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your time in reading our submission and interesting questions! >**Feature correlation:** This is an interesting intuition, and we agree that the example in Figure 1 does seem to be due to this “red”ness (and one might draw similar conclusions about the examples in Figure 9 in the supplement). However, it is unclear how to define and quantify correlation for systematic study: we believe this is very interesting future work. This seems to have implications beyond privacy in understanding distillation’s dark knowledge, and we were unable to find related results in this literature. >**Why use LiRA:** LiRA represents the current state-of-the-art for membership inference. Thus, it represents the strongest baseline and also the strongest starting point for our research. Prior work using distillation as a defense has all predated the LiRA paper and so only evaluated with weaker attacks. This is also a motivation for our work: to understand whether stronger membership inference attacks could challenge this use of distillation. We also see this in Figure 12 in the supplementary material: a simple logit-gap membership inference attack (akin to simpler attacks such as Yeom et al. 2018) is unable to get beyond random guessing on CIFAR-10. --- Rebuttal Comment 1.1: Title: Addressing the rebuttal #1 Comment: Thank you for your rebuttal! I am happy with the responses and I have increased my score by another point (7 --> 8). On a similar note, I found this paper on `déjà vu memorization` (https://arxiv.org/abs/2304.13850) which might be relevant for quantifying memorization through feature correlations. For future work, it might be nice to explore threat models around how privacy could be compromised by learning sensitive (correlated) features!
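As a point of comparison for the "why use LiRA" discussion above, a simple loss/logit-gap style attack in the spirit of Yeom et al. 2018 can be sketched with synthetic losses; all distributions and thresholds below are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def loss_threshold_attack(losses, tau):
    """Toy Yeom-style attack: predict 'member' when the loss falls below tau."""
    return losses < tau

# Illustrative synthetic losses: members tend to have lower loss than
# non-members because the model has fit them during training.
member_losses = rng.exponential(scale=0.2, size=1000)
nonmember_losses = rng.exponential(scale=1.0, size=1000)

tau = 0.5
tpr = loss_threshold_attack(member_losses, tau).mean()
fpr = loss_threshold_attack(nonmember_losses, tau).mean()
print(tpr, fpr)  # the attack has signal when tpr > fpr
```

Such threshold attacks only exploit average-case loss gaps, which is one reason calibrated per-example attacks like LiRA can be far stronger, as the rebuttal's Figure 12 reference suggests.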
null
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Training Chain-of-Thought via Latent-Variable Inference
Accept (poster)
Summary: This paper introduces a principled and practical approach to boost the training of the "chain-of-thought" through the inference of latent variables. Instead of the typical variational inference, the authors opt for an MCMC-EM method to circumvent the issue of posterior collapse, a common occurrence in auto-regressive models with latent variables, which could be exacerbated when dealing with large language models. By treating rationales as latent variables, this methodology enables training without the need for detailed rationales, which can be expensive to produce. Moreover, the authors propose a control-variate technique that reduces training variance while maintaining unbiasedness. The theoretical underpinnings of their approach are validated on the BIG-Bench Hard dataset. Strengths: 1. Although the soft-prompt approach is widely used, this proposal introduces an innovative perspective on 'prompt-finetuning' utilizing a chain-of-thought. 2. The assumptions and derivations presented are straightforward and intuitive with only minor confusion. 3. The authors thoroughly explore the links between their work and previous studies, while also critically examining the limitations of their own findings. 4. The authors validate their theories by applying them to the significant benchmark of BBH. Weaknesses: See questions. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. For the acceptance rate in Metropolis-Hastings, why is the transition kernel $p(\tilde{z},y|x)$ not conditioned on the previous state $z$? 2. For sampling from the intractable posterior $p(z|x,y)$, what if you directly sample from this distribution using finite-step MCMC as $p(z|x,y)\propto p(z|x)p(y|z,x)$? You do not need to add more experiments. Could you explain the difference? 3. For the control variate, in the appendix you mention it may not be that useful in the stochastic setting.
For Figure 1: suppose you train long enough; will the training estimate become the same in the two settings (with or without the control variate), based on your claim in Eq. 3? Meanwhile, what does the expectation-over-MCMC notation mean? 4. How many tokens do you use for the latent variable $z$? 5. While I commend the authors for validating their approach on an important benchmark, an aspect I deeply appreciate, I believe that testing on a more diverse dataset would bolster the paper's credibility and appeal. This is particularly true when making a comparison with [1]. 6. Do you have a failure-case analysis and do you need human evaluation? I would be more than willing to raise my review scores if the authors could provide further clarification and deeper insights about the paper. [1] STaR: Self-Taught Reasoner Bootstrapping Reasoning With Reasoning Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have discussed the paper's limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
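As background for the control-variate question above, here is a generic score-function control-variate demo on a toy categorical distribution. It is in the same spirit as the paper's technique but is not the paper's exact estimator; the key fact it illustrates is that $\mathbb{E}_p[\nabla \log p(z)] = 0$, so subtracting a multiple of the score leaves the expectation unchanged while shrinking variance.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 3-way categorical with parameters given by logits (illustrative numbers).
logits = np.array([1.0, 0.5, -0.5])
p = np.exp(logits) / np.exp(logits).sum()
f = np.array([2.0, 1.0, 0.0])  # some per-outcome "reward" to estimate a gradient of

def score(k):
    # d/d_logits log p(k) for a categorical: one-hot(k) - softmax(logits)
    e = np.zeros(3)
    e[k] = 1.0
    return e - p

scores = np.array([score(k) for k in range(3)])  # (3, 3) table of score vectors

z = rng.choice(3, size=100_000, p=p)
raw = f[z, None] * scores[z]            # plain score-function estimator
beta = f @ p                            # control-variate coefficient = E[f]
cv = (f[z, None] - beta) * scores[z]    # same mean, since E[score] = 0

print(np.allclose(raw.mean(0), cv.mean(0), atol=0.02))  # True: unbiasedness preserved
print(cv.var(0).sum() < raw.var(0).sum())               # True: variance reduced
```

The variance reduction is largest when the control variate is strongly correlated with the base estimator, which is consistent with the authors' remark that the control variate helps most once the model is accurate.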
Rebuttal 1: Rebuttal: Thank you for your thoughtful questions! We will try to answer them below; please let us know if anything remains unclear or if there are particular points that you would like to see emphasized in the final paper. # 1. Acceptance probability depends on previous state Our use of a binary (deterministic) likelihood and identifying the proposal $q(z\mid x, y)$ with the prior $p(z\mid x)$ dramatically simplifies the acceptance probability, so that we only need to consider whether or not the proposed state leads to a correct answer when deciding whether or not to accept it. If we used a “soft” likelihood function as discussed in Appendix A, we would indeed need to look more closely at the previous state to compute the acceptance probability. # 2. Why not use finite-step MCMC? The main reason we only take a single MCMC step per iteration is to minimize the per-update cost of the algorithm; the cost of drawing a sample is comparable to the cost of a gradient update for the prompt, so it seems natural to balance these costs. # 3. Is the control variate necessary late in training? If anything, the control variate is probably most helpful late in training, since the correlation between the control variate and the base gradient estimator increases with the accuracy of the model. There may be subtle effects on what kinds of local optima the two gradient estimators tend to yield, but these might be reduced by using a smaller learning rate and more training steps. # 3. What’s meant by $\mathbb{E}_\textrm{MCMC}$? The expectation is with respect to the randomness in the MCMC proposal, which for TRICE is just the proposed sample from $p_\theta(z\mid x)$. We will change the notation to make this clearer. # 4. How many tokens per rationale? We cap the number of tokens per rationale at 1.25 times the length of the longest of the three exemplar rationales used to initialize the soft prompt. 
The goal is to ensure that we have plenty of space to generate good rationales without unnecessarily increasing the cost of sampling and gradient evaluation. Thank you for pointing out that this information is missing from the paper! We’ll include it in the next version. # 5. Other datasets. See the comment at the top of the page for results on GSM8K. # 6. Failure cases. We will add examples of rationales that lead to both correct and incorrect answers in an appendix. # 6. Human evaluation. Formal human evaluation is beyond the scope of this paper, but will be important in the future in order to determine to what extent models trained using TRICE get answers right for the right reasons. We will add some discussion of this point to the Limitations section. --- Rebuttal Comment 1.1: Comment: Thank you for your thoughtful response. I appreciate your feedback on my previous concerns and agree with Reviewer AjuJ about the need to enhance the notations in this paper. While I understand the paper utilizes a proprietary model, it would be greatly beneficial if an open-source counterpart could be provided. --- Reply to Comment 1.1.1: Title: reference implementation Comment: (Largely copy-pasted from response to Reviewer d8Yh) We will open-source an IPython notebook with a reference implementation of TRICE that is agnostic to the backend LLM. To get it to work with a new pretrained LLM, the user will have to provide four callables with the following signatures: ``` sample(params, context, num_steps, seed) log_prob(params, context, continuation) grad(params, context, continuation) init(seed) ``` `init` should return an initial value for a PyTree `params` ($\theta$ in the notation of the paper). `sample` should draw a sample `continuation` ($z$ in the notation of the paper) from the fine/prompt-tuned LLM of maximum `num_steps` tokens given `context` ($x$ in the notation in the paper). 
`log_prob` should compute the log-probability of generating `continuation` given `context` and `params` ($\log p_\theta(z\mid x)$), and `grad` should be the gradient of `log_prob` w.r.t. `params` ($\nabla_\theta\log p_\theta(z\mid x)$). `context` and `continuation` should be Python strings, which may incur a bit of a performance hit with unnecessary tokenization and detokenization but simplifies the API. If the user provides these four functions, the notebook will run a short training loop of TRICE on a BBH task. Hopefully this will make it easier for others to reproduce our results using their own LLMs (whether open-source or proprietary).
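Given the four-callable API sketched above, the accept/reject step of TRICE (binary likelihood, prior as proposal) might look roughly like the following. This is a sketch, not the authors' code; the `is_correct` checker and the dummy callables are hypothetical stand-ins.

```python
def trice_step(params, context, z_prev, is_correct, sample, grad,
               num_steps=256, seed=0):
    """One independence-chain MH step: propose from the prior, accept if correct."""
    # Propose a fresh rationale from p_theta(z | x).
    z_tilde = sample(params, context, num_steps, seed)
    # With a binary likelihood and the prior as proposal, the MH acceptance
    # rule reduces to: accept iff the proposal leads to the correct answer.
    z_new = z_tilde if is_correct(z_tilde) else z_prev
    # Gradient estimate: grad of log p_theta(z_new | x) w.r.t. params.
    return z_new, grad(params, context, z_new)

# Dummy stand-ins for the API, just to exercise the control flow.
def sample(params, context, num_steps, seed):
    return "proposed rationale -> 42"

def grad(params, context, continuation):
    return 0.0  # placeholder gradient

state, g = trice_step(None, "Q: ...", "old rationale",
                      lambda z: z.endswith("42"), sample, grad)
rejected, _ = trice_step(None, "Q: ...", "old rationale",
                         lambda z: False, sample, grad)
print(state)     # the accepted proposal
print(rejected)  # rejection keeps the previous state
```

The fallback to `z_prev` on rejection is what distinguishes this from pure rejection sampling: even when the proposal is wrong, the chain retains a previously accepted rationale to compute gradients on.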
Summary: This paper explores considering Chain-of-Thoughts (CoT) as a latent variable and introduces TRICE: an MCMC-EM algorithm for optimizing CoT latent-variable models using a Metropolis-Hastings algorithm (rejection sampling) coupled with a control variate. The authors tested the model by fine-tuning a proprietary LLM on 27 BIG-bench hard tasks and report improvements when compared to direct prompt tuning and STaR. The authors tested the effectiveness of the control variate but did not test the method against more conventional optimisation methods. Strengths: Overall, this paper discusses a well-defined issue and explores innovative solutions to the optimization problem; it was a refreshing and educative read, thank you! Main strengths: 1. Optimizing LLMs as latent variable models has great potential and the authors present the problem well 2. The authors introduce an innovative gradient estimator (MCMC-EM + control variate) which is computationally efficient (single sample per data point) and seems to work very well 3. The authors demonstrate progress on a subset of the MMLU dataset 4. Theoretical and experimental limitations are thoroughly discussed Weaknesses: Overall, the paper lacks structure and more extensive comparison with the existing literature on Monte Carlo methods. I believe not addressing these points would greatly reduce the impact of this paper. Nevertheless, I would happily accept this paper if these limitations are resolved! 1. The paper overall lacks structure. Although I appreciated the lengthy discussions, I think the main storyline could be made more clear by separating the main ideas from the discussions. 2. The optimization method is only presented using full text and Algorithm 1. This paper would convey its idea more clearly using a set of two additional equations: A) expression of the *true* gradients and B) Monte Carlo estimate (which translates into Algorithm 1). 3.
Classic gradient estimation methods are not covered sufficiently and some of the statements in section 2.2 *Why not variational inference?* are misleading. 1. Posterior collapse (discussed in Lucas et al., 2019) is an issue inherent to the optimization of the proposal distribution $q(z | x, y)$, and $q$ is never optimized in this paper. 2. The authors discuss the limitations of the ELBO, but the ELBO only arises when using a single-sample estimate of the log-likelihood. When working with discrete latent variables, it is more common to use the Importance Weighted Bound (IWB). I.e., use a K-sample importance-weighted estimate of the likelihood (e.g., $\log p(y | x) \approx \log \frac{1}{K} \sum_{j=1}^K \frac{p(y, z^{(j)} | x)}{q(z^{(j)}|x,y)}$ -- as done in ["Revisiting Reweighted Wake-Sleep for Models with Stochastic Control Flow", Le et al., 2019](https://arxiv.org/abs/1805.10469)), whose standard deviation scales as $\sqrt{1/K}$. See ["Monte Carlo Gradient Estimation in Machine Learning", Mohamed et al., 2020] for more background on optimization methods. 6. The experimental comparison with RWS is not included in the paper. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. How exactly did you compute the gradients in the RWS experiment? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: 1. I stumbled upon this PhD thesis: [Deep Latent Variable Models for Natural Language Processing, Lievin 2022](http://vlievin.github.io/deep-lvms-for-nlp.pdf), which discusses similar ideas.
You might not have been the first ones to consider that "CoT methods are latent-variable probabilistic models" Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
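The K-sample importance-weighted estimate raised in this review can be illustrated on a toy two-state latent-variable model; all numbers below are made up for illustration, and the prior is used as the proposal so the importance weights simplify.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy discrete latent-variable model (illustrative numbers only):
# z ~ Bernoulli(0.3); the answer y is "correct" w.p. 0.9 if z = 1, else 0.1.
pz = np.array([0.7, 0.3])
py_given_z = np.array([0.1, 0.9])
true_logp = np.log((pz * py_given_z).sum())  # exact log p(y)

def iwb(K, reps=20_000):
    # K-sample importance-weighted estimate with the prior as proposal:
    # log (1/K) sum_k p(y | z_k); the p(z)/q(z) weights cancel here.
    z = rng.choice(2, size=(reps, K), p=pz)
    return np.log(py_given_z[z].mean(axis=1)).mean()

b1, b100 = iwb(1), iwb(100)
print(b1, b100, true_logp)  # the bound tightens toward log p(y) as K grows
```

With K = 1 the estimate is just the ELBO-style single-sample bound; larger K tightens it toward the true log-likelihood by Jensen's inequality, at the cost of more samples per update, which is exactly the trade-off the authors' MCMC scheme is designed to avoid.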
Rebuttal 1: Rebuttal: Thank you so much for the thoughtful questions and suggestions. We hope that our response below will address your concerns. # 1. Structural improvements This is a great suggestion. To reduce the risk that readers will lose the main thread, in the final version we will more clearly set apart as “remarks” discussions that are meant to provide context and intuitions. # 2. Providing true gradient and Monte Carlo estimate The true gradient of the per-example log-likelihood is given in Equation 2, and the Monte Carlo estimate is given in Equation 3 and expanded in Equation 4. Perhaps we’re misunderstanding your suggestion? # 3, 4: VI, RWS, and the IWB Thank you for pointing out some ways in which we can make this section clearer. We have rewritten section 2.2 to focus more on RWS and the IWB (which we also tried before settling on MCMC-EM; see below) than on VI. Hopefully the version below is an improvement—we’d be very grateful for any further feedback! ## 2.2 Why not variational inference, reweighted wake-sleep, or the importance-weighted bound? We considered three alternatives to the MCMC-EM approach that we pursue in this paper: variational EM (e.g., Bishop, 2006), reweighted wake-sleep (RWS; Bornschein & Bengio, 2015; Le et al., 2019), and the importance-weighted bound (IWB) used in importance-weighted autoencoders (IWAE; Burda et al., 2015). Variational EM is a common strategy for training latent-variable models, but variational inference with discrete latent variables is challenging (e.g., Tucker et al., 2017). [pointer to VI discussion?] 
RWS is an attractive alternative that avoids high-variance score-function gradients; it proceeds by sampling $M$ samples $z_{1:M}$ from a guide model $q_\phi(z\mid x, y)$, assigning the samples weights $w_m\propto \frac{p_\theta(y, z_m\mid x)}{q_\phi(z_m\mid x, y)}$, and updating both the model parameters $\theta$ and the guide parameters $\phi$ to maximize the reweighted log-probabilities $\sum_m w_m \log p_\theta(z_m\mid x)$ and $\sum_m w_m \log q_\phi(z_m\mid x, y)$. Unfortunately, we found that RWS training sometimes produced models that generate degenerate zero-length rationales $z$. Digging deeper, we found that the weights $w_m$ tend to be larger for shorter sequences $z_m$, so the model and guide learn to produce shorter and shorter sequences until they consistently produce empty rationales. Why do longer sequences tend to get lower weights? We can write the unnormalized weights as $\tilde w_m = (c(y, z_m) + \epsilon)\frac{p_\theta(z_m\mid x)}{q_\phi(z_m\mid x, y)} = (c(y, z_m) + \epsilon)\prod_{t=1}^{T_m} \frac{p_\theta(z_{m,t}\mid x, z_{m,1:t-1})}{q_\phi(z_{m,t}\mid x, y, z_{m,1:t-1})}$, where $T_m$ is the length of $z_m$, and $\epsilon$ is added to address the case where none of the $M$ samples are correct. If there is a mismatch between $q_\phi(z_{m,t}\mid x, y, z_{m,1:t-1})$ and $p_\theta(z_{m,t}\mid x, z_{m,1:t-1})$, then $\frac{p_\theta(z_{m,t}\mid x, z_{m,1:t-1})}{q_\phi(z_{m,t}\mid x, y, z_{m,1:t-1})}$ will typically be less than one, with a few rare exceptions with extremely high weights ensuring that $\mathbb{E}_q[p(z\mid x)/q(z\mid x, y)]=1.$ If these exceptions are rare enough that they do not typically appear in a sample of $M$ sequences $z_{1:M}$, then the normalized weights $w_{1:M} = \frac{\tilde w_{1:M}}{\sum_m \tilde w_m}$ will tend to assign higher mass to shorter sequences unless those shorter sequences are much less likely to be correct.
Finally, we considered optimizing the $M$-sample IWB $\mathcal{L}(x, y, \theta) \triangleq \mathbb{E}_{p}[\log \frac{1}{M}\sum_m\frac{p(y, z_m\mid x) + \epsilon}{p(z_m\mid x)}]$ (where we use $p_\theta(z\mid x)$ as an importance-sampling proposal, $M$ is the number of draws from $p_\theta(z\mid x)$, and $\epsilon$ is added to avoid taking the log of zero). This works well as long as the number of samples per example $M$ is large enough, but sampling several rationales per example carries a high per-step cost; the independence-chain MCMC strategy we use in TRICE lets us do $M$ times as many parameter updates per sample from $p_\theta(z\mid x)$. > How exactly did you compute the gradients in the RWS experiment? We draw $M$ samples $z_{1:M}$ from a guide model $q_\phi(z\mid x, y)$, compute the unnormalized weights $\tilde{w}_m = \frac{p(z_m\mid x)\,\epsilon^{(1 - c(y,z_m))}}{q(z_m\mid x,y)}$ in log space, assign the weights $w_m = \mathrm{stopgradient}(\tilde{w}_m / \sum_m\tilde{w}_m),$ and optimize the loss $\sum_m w_m \left(-\log p_\theta(z_m\mid x) - \log q_\phi(z_m\mid x,y)\right)$. This objective gives us an estimate $\sum_m -w_m \nabla_\theta \log p_\theta(z_m\mid x)$ for the gradient w.r.t. $\theta$ and $\sum_m -w_m \nabla_\phi \log q_\phi(z_m\mid x,y)$ for the gradient w.r.t. $\phi$. The $\epsilon^{(1 - c(y,z_m))}$ factor represents the likelihood $p(y\mid x,z_m)$: the likelihood of arriving at the correct answer given a correct rationale is 1, and the likelihood of arriving at the correct answer given a wrong rationale is $\epsilon$. This has the same effect as adding $\epsilon$ to the binary likelihood that we described above: it avoids taking the log of zero when all rationales are wrong. In the experiments, we set $\epsilon = 0.001$. # Lievin thesis Thanks for the reference to the Lievin thesis! We will cite it in the revision.
Although that work describes a latent-variable chain-of-thought model, we note that the inference methods used for the COT model are relatively simple (in contrast to other models from the thesis), similar in spirit to self-consistent COT. Our work exploits the connection to probabilistic modeling to apply more sophisticated inference algorithms; we are not aware of previous work that takes this strategy. In contrast, previous work focuses on simple Monte Carlo (self-consistent COT, Lievin 2022), or rejection sampling (STaR). --- Rebuttal Comment 1.1: Title: Improving clarity and presenting main equations more clearly Comment: ### 1. Structural improvement Enclosing remarks would indeed help. ### 2. Providing true gradient and Monte Carlo estimate Yes, the true gradient is provided in Eq 2. However, the expression of the gradient approximation is spread across sections "**2.1 derivations**" and Eq 3&4. Equations 3&4 (which are the core equations defining TRICE) are enclosed in a section titled "**6 Adding a control variate**". A better structure, better choice of section titles and an improved framing of the equations would help. 1. Gradient $$\eta_\theta = E_{p_\theta(z | x, y)} \left[ \nabla_\theta \log p_\theta(z | x) \right]$$ 2. Gradient approximation via MCMC $$\eta_\theta \approx E_{\mathrm{MCMC}} \left[ \nabla_\theta \log p_\theta (z | x) \right] $$ 3. Gradient approximation via MCMC with control variate $$\eta_\theta \approx \ldots $$ **NB**: Similarly to Reviewer zJeM, I find the notation $E_{\mathrm{MCMC}}\left[ \cdot \right]$ confusing. --- Reply to Comment 1.1.1: Title: Thanks for the suggestions! Comment: Thanks for expanding on your suggestions about improving the presentation of the gradient estimators! We very much appreciate your taking the time to help us improve the paper's presentation. We agree that a structure along the lines you suggest will be clearer: we will explicitly delineate the path from 1. true-but-intractable gradient to 2. 
MCMC estimate to 3. variance-reduced MCMC estimate. And we agree that the $\mathbb{E}_\mathrm{MCMC}$ notation is a bit ambiguous. We will rework it to something more explicit (and closer to what's in Algorithm 1), something along the lines of $z' = c(\tilde z, y)\tilde z + (1-c(\tilde z, y))z;\ \hat g = \nabla_\theta\log p_\theta(z'\mid x);\ \mathbb{E}_{z,\tilde z}[\hat g\mid\theta] \approx \mathbb{E}_{p_\theta(z\mid x, y)}[\nabla_\theta \log p_\theta(z\mid x)],$ where $\mathbb{E}_{z,\tilde z}[\cdot\mid\theta]$ denotes an expectation with respect to both the proposal $\tilde z$ and the previous state $z$. This in turn lets us write the derivation of the control-variate estimator as $\mathbb{E}_{z,\tilde z}[\hat g\mid \theta] = \mathbb{E}_{z}[\mathbb{E}_{\tilde z}[\nabla_\theta\log p_\theta(z'\mid x)] \mid\theta] = \mathbb{E}_{z}[\mathbb{E}_{\tilde z}[\nabla_\theta\log p_\theta(z'\mid x) - \beta\nabla_\theta\log p_\theta(\tilde z\mid x)] \mid\theta],$ which is hopefully also a bit clearer. --- Rebuttal Comment 1.2: Title: VI, RWS, and the IWB - great insights Comment: Thank you for the clarification. It makes sense that you obtain zero-length rationales, and I understand why you relate it to posterior collapse. > How exactly did you compute the gradients in the RWS experiment? Thank you for the details. What are your thoughts on optimizing $\theta$ using the "M-sample IWB" only? Meaning, using the expression of the RWS gradients you provided and using, for instance, $p(z\mid x,y) \approx p_\theta(z\mid x) c(x,y)$ as the sampling distribution? How does it relate to TRICE? --- Reply to Comment 1.2.1: Title: M-sample IWB optimization Comment: We did try an M-sample IWB strategy (equivalent to RWS with a fixed $q(z\mid x, y) = p_\theta(z\mid x)$), and found that it worked well as long as the number of samples $M$ was large enough.
The main advantage of TRICE over this IWB approach is that it lets us draw $M$ times fewer samples per update, which allows for a significant improvement in wall-clock time even for relatively small values of $M$. (A secondary advantage is that TRICE frees us from having to choose an appropriate value of $M$ that balances bias and computational cost.) If we do multiple MCMC updates in TRICE before computing a gradient update, it begins to look more like the variant of IWB optimization where we do $M$ forward passes and randomly choose a single particle to compute the backwards pass to save computation (as suggested at the end of section 3 in the 2015 IWAE paper). The $M$ proposals can be generated and evaluated in parallel, and the state we compute gradients on is the "last" one to be accepted ("last" in scare quotes since the order is arbitrary). The only difference is the case where none of the $M$ particles yield correct answers, in which case TRICE still has the option of falling back on a state generated in a previous iteration. --- Rebuttal 2: Title: Increasing my rating 4 -> 6 Comment: Thank you for providing more details about your experiments with the IWB. Based on our discussion, I understand that you have experimented with many objectives, including multi-sample objectives (IWB). Trusting that you will include more details in the paper (comparison with a vanilla IWB objective) and that you will improve the structure of the paper (equations), I am increasing my rating as follows: - **presentation**: 2 -> 3 - **soundness**: 2 -> 3 - **rating**: 4 -> 6 I would be willing to increase my rating further if the experiment section was to be expanded to include - a comparison with the IWB and/or other objectives you have experimented with. - insights on the variance of TRICE compared to other baselines --- Rebuttal Comment 2.1: Title: IWB results Comment: Thank you for being so engaged in the discussion process! 
We ran (and will include in the final paper) an additional set of BBH experiments (with the higher-quality base model discussed above) using the IWB with 4, 8, and 16 particles and the bootstrapped initial parameters. The results (with TRICE for comparison) are:

|Eval Method|4-Sample IWB|8-Sample IWB|16-Sample IWB|TRICE|
|-|-|-|-|-|
|Greedy Decoding|64.6|67.0|66.2|72.8|
|Single CoT Sample|64.2|65.6|64.6|72.5|
|Self Consistency (40 CoT Samples)|65.3|67.4|66.6|73.1|

We will also include a figure demonstrating the qualitative behavior of RWS.
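As a concrete illustration of the weighting scheme described in this thread, here is a small NumPy sketch of the self-normalized RWS weights. The function name and toy inputs are our own, and autodiff bookkeeping is omitted, since the normalized weights are simply treated as constants (the stop-gradient above):

```python
import numpy as np

def rws_weights(log_p, log_q, correct, eps=1e-3):
    """Self-normalized importance weights for M sampled rationales.

    log_p:   log p_theta(z_m | x) under the tuned model
    log_q:   log q_phi(z_m | x, y) under the guide
    correct: c(y, z_m) in {0, 1}; wrong rationales get likelihood eps
    """
    log_w = log_p + (1.0 - correct) * np.log(eps) - log_q  # log-space unnormalized weights
    log_w -= log_w.max()  # stabilize before exponentiating
    w = np.exp(log_w)
    return w / w.sum()

# Two correct and two incorrect rationales, equal log-probs under model and guide:
w = rws_weights(np.zeros(4), np.zeros(4), np.array([1.0, 1.0, 0.0, 0.0]))
# The correct rationales carry almost all of the weight; the eps floor keeps
# the weights (and hence the loss) finite even when every rationale is wrong.
```

The weighted loss $\sum_m w_m(-\log p_\theta(z_m\mid x) - \log q_\phi(z_m\mid x, y))$ is then minimized with the weights held fixed.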
Summary: This work introduces a new method for training models for chain-of-thought prompting, aimed at maximizing the marginal probability of generating a correct answer using Markov-chain Monte Carlo, expectation maximization, and a novel control variate technique, along with prompt tuning. Much of the work is spent deriving and analyzing the method (e.g. compared to related methods like self-taught reasoner STaR), with a final section dedicated to testing the method and demonstrating very significant performance gains over vanilla prompt tuning (no CoT) and STaR. Strengths: - Principled, novel method along with extensive derivation. Particularly, this well motivates the need for taking a marginalization framing rather than the greedy approach often used before - It is generally very useful to see more theoretical justification provided in this area. Past works (e.g. STaR) provide many novel ideas, but it is useful to take a more comprehensive (and marginalization-based) approach to the problem - extensive theoretical/technical rationale for method over other options (STaR, variational inference) - very strong experimental results, demonstrating significant gains over STaR, another recent method for training for CoT prompting Weaknesses: - Fairly limited analysis of experimental results (mostly aggregate results on Big Bench tasks in the main paper, without too much specific analysis) - It would also be useful/interesting to see what the generated rationales look like: how do they differ from e.g. STaR, and how often are they correct in a human-interpretable way rather than just leading to a correct answer? Adding some information about this in the main paper would be useful - Fairly limited discussion of limitations Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: None Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: Limitations section is quite limited. It would be useful to see e.g. a discussion of the effects of rationales on user trust (will having rationales make users trust models more even if the rationales are wrong?) Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the positive feedback! # Additional experimental results While there is limited space in the main text, we will add an appendix with a full table of per-task BBH results. Also, we added an experiment on GSM8K; see the top comment for details. ### Qualitative analysis of rationales Thank you for this suggestion—we will add some example rationales (both correct and incorrect) from TRICE and STaR to the final version. # Discussion of user trust Thank you for your suggestion that we add some discussion of the impact of rationales on user trust to the limitations section. On the one hand, providing rationales may make it easier for motivated users to judge the trustworthiness of LLMs for themselves; if an LLM's reasons are bad, we should consider its conclusions suspect. On the other hand, many users may not bother to read and critique an LLM's rationales, and may take a rationale's mere existence as evidence of correctness. While the focus of this work is on improving LLM capabilities, we should at least acknowledge that much work remains to be done on alignment questions such as this. --- Rebuttal Comment 1.1: Comment: Thank you for the response! I look forward to this further information in later versions of the paper
Summary: This paper proposes treating rationales as latent variables and considers the marginal distribution over answers, averaging over possible rationales. To do so, an MCMC procedure is proposed, in which rationales are proposed via an independence sampler. The approach is compared to STaR (Zelikman et al., 2022) as a baseline on 27 BigBench-HARD tasks. Rather than fine-tune all the model weights, prompt tuning is used as a parameter-efficient alternative. The proposed approach is found to outperform STaR on the subset of tasks considered. Strengths: * The derivations the authors present are detailed and comprehensive. The design choices are explained well and justified in context. * Compared to alternate methods, the approach advocated in this paper seems more principled. * The authors discuss a potential alternate approach (variational Bayes) and justify why it is not appropriate. Weaknesses: * As the authors acknowledge, TRICE is only tested on small datasets. The behavior of the comparison model is significantly different than the expected behavior; the authors speculate this is due to overfitting of the comparison model, but do not compare TRICE’s performance on Commonsense QA or reduce the number of inner loop steps to prevent overfitting. * While the overall approach is novel, it’s not clear that the improvements and alterations proposed present significant innovations on existing methods. Furthermore, I have some concerns about the sensitivity of the approach to hyper-parameters, such as the # of MCMC steps, initialization scheme, etc. * Some implementation details are unclear. The methods as-is would not be reproducible, and it seems that the authors do not plan to release their code, raising concerns about replicability. Technical Quality: 3 good Clarity: 3 good Questions for Authors: * What is the specific $c(z,y)$ used in the experiments?
It never seems to be defined— an example is given on line 71, but it’s unclear whether that’s the specific correctness function used in the experiments. * Line 80 says that “For example, the guide might prompt an LLM specifically to give an rationale for the answer”— was this the case for all experiments? * Line 296 says that "an adaptation of the STaR strategy" is tested-- what aspects specifically were adapted? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: There is a nominal discussion of limitations but I think this could be significantly expanded. For example, MCMC can be sensitive to hyperparameters and mixing is hard to diagnose. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you so much for the thoughtful questions and suggestions. We hope that our response below will address your concerns. # Larger datasets We added an experiment on GSM8K; see the top comment for details. # Sensitivity to number of MCMC steps In Algorithm 1 (and in all of our experiments) we just do a single MCMC update per iteration. It might be interesting to explore treating the number of MCMC steps per iteration as a hyperparameter, but we feel that 1 is a natural default that allows for the prompt to be updated as often as possible. (This is somewhat analogous to the common practice of training VAEs using single-sample estimates of the per-example loss.) # Sensitivity to initialization Prompt tuning has been known to be sensitive to initialization from its inception; for example, random initializations tend to underperform initializations based on embeddings of real text sequences (e.g., [3]). We also found that initializing the soft prompt with human-generated rationales (rather than bootstrapped rationales) does slightly improve TRICE’s performance (Table 1), but both initialization schemes worked well. # Reproducibility Since our code uses a proprietary model, we cannot release the code we used in our experiments. We tried to include enough information in the main text for a reader to reimplement these experiments, but we may have missed a few details—we will add an appendix with pseudocode describing in more detail all of the algorithms that we ran, which may be a clearer format for providing this information than the dense text blocks in Section 4. We appreciate your identifying some specific points that were not clear in the “questions” section above (which we will address below)—if you notice any other details that are missing from the main text please do let us know! 
# MCMC hyperparameters and mixing diagnostics While MCMC kernels in general are often sensitive to hyperparameters and difficult to diagnose, the particular independence-chain proposal we use in TRICE has no hyperparameters (it simply makes proposals from the prior) and, in the binary-likelihood setting we consider, generates unbiased samples as soon as it accepts a proposal (since it is effectively doing rejection sampling on the posterior). We will emphasize these points in the final version. # Answers to specific questions > What is the specific $c(z, y)$ used in the experiments? Thank you for pointing out that this was not precise. We compare the final tokens of $z$ to the tokens of the target “the answer is {y}.”. This is to account for various formats of the answers like “So the answer is (A).” or “Thus the answer is (A).” We will clarify this in the paper. > Line 80 says that “For example, the guide might prompt an LLM specifically to give a rationale for the answer”— was this the case for all experiments? Yes, we used the same prompt template across all experiments. We described the template in the Experiments section (lines 267-272). > Line 296 says that "an adaptation of the STaR strategy" is tested-- what aspects specifically were adapted? In the STaR paper, the authors performed fine-tuning of all LLM parameters, while in our work we use a parameter-efficient tuning method, namely prompt tuning. We will clarify this in the paper. [3] Li, Xiang Lisa, and Percy Liang. "Prefix-tuning: Optimizing continuous prompts for generation." _arXiv preprint arXiv:2101.00190_ (2021). --- Rebuttal Comment 1.1: Comment: Thank you for the response, which has clarified some details that were unclear in the initial submission. Although the code used in the experiments may be proprietary, this doesn't preclude producing a separate reference open-source implementation (perhaps with reduced capabilities, working with open-source LLMs, etc.).
--- Reply to Comment 1.1.1: Title: Reference implementation Comment: > Although the code used in the experiments may be proprietary, this doesn't preclude producing a separate reference open-source implementation... This is an excellent suggestion! We will open-source an IPython notebook with a reference implementation of TRICE that is agnostic to the backend LLM. To get it to work with a new pretrained LLM, the user will have to provide four callables with the following signatures: ``` sample(params, context, num_steps, seed) log_prob(params, context, continuation) grad(params, context, continuation) init(seed) ``` `init` should return an initial value for a PyTree `params` ($\theta$ in the notation of the paper). `sample` should draw a sample `continuation` ($z$ in the notation of the paper) from the fine/prompt-tuned LLM of maximum `num_steps` tokens given `context` ($x$ in the notation in the paper). `log_prob` should compute the log-probability of generating `continuation` given `context` and `params` ($\log p_\theta(z\mid x)$), and `grad` should be the gradient of `log_prob` w.r.t. `params` ($\nabla_\theta\log p_\theta(z\mid x)$). `context` and `continuation` should be Python strings, which may incur a bit of a performance hit with unnecessary tokenization and detokenization but simplifies the API. If the user provides these four functions, the notebook will run a short training loop of TRICE on a BBH task. Hopefully this will make it easier for others to reproduce our results using their own LLMs (whether open-source or proprietary).
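To make the four-callable API above concrete, here is a toy sketch of a TRICE-style training loop built on those signatures. The two-continuation stand-in model, the correctness check, and all numeric choices (learning rate, $\beta$, step counts) are our own illustrative assumptions, not the paper's implementation; only the signatures and the accept-if-correct independence-chain update follow the description above.

```python
import numpy as np

# Toy stand-ins for the four user-provided callables. The "model" is a single
# logit over two possible continuations, "good" (a correct rationale) and
# "bad" -- a stand-in for a real prompt-tuned LLM.
def init(seed):
    return np.array(0.0)  # params theta (a PyTree in general)

def _probs(params):
    p_good = 1.0 / (1.0 + np.exp(-params))  # sigmoid(theta)
    return {"good": float(p_good), "bad": float(1.0 - p_good)}

def sample(params, context, num_steps, seed):
    rng = np.random.default_rng(seed)
    return "good" if rng.random() < _probs(params)["good"] else "bad"

def log_prob(params, context, continuation):
    return float(np.log(_probs(params)[continuation]))

def grad(params, context, continuation):
    # Gradient of log_prob w.r.t. params: (1 - sigmoid) for "good", -sigmoid for "bad".
    s = _probs(params)["good"]
    return (1.0 - s) if continuation == "good" else -s

def trice_step(params, context, z_prev, correct, lr, beta, seed):
    z_tilde = sample(params, context, num_steps=16, seed=seed)  # independence-chain proposal
    z_new = z_tilde if correct(z_tilde) else z_prev             # keep only correct rationales
    # Control-variate estimate: E[grad log p(z_tilde|x)] = 0 under the proposal.
    g = grad(params, context, z_new) - beta * grad(params, context, z_tilde)
    return params + lr * g, z_new                               # ascend log p(z'|x)

params, z = init(0), "good"  # start from a bootstrapped correct rationale
for t in range(200):
    params, z = trice_step(params, "x", z, lambda r: r == "good",
                           lr=0.5, beta=0.5, seed=t)
p_good = _probs(params)["good"]  # rises toward 1 as the "prompt" is tuned
```

With these stand-ins, the accept rule retains the last correct rationale and the $\beta$-weighted proposal term is a mean-zero control variate, so the probability of sampling a correct rationale climbs over training.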
Rebuttal 1: Rebuttal: We sincerely thank the reviewers for their many helpful and thoughtful comments. Most of our responses are in the per-review comments, but we want to highlight here two sets of new experimental results that we will add to the paper. # BIG-Bench Hard results with a stronger base model After our initial submission, we reran the TRICE BBH experiments using bootstrapped initialization and a higher-quality pretrained model of similar size. Like our original base model, this model is in the 40–75B range*, significantly smaller than GPT-3's 175B parameters. When prompt-tuning this improved model, TRICE gets an average greedy-decoding accuracy on BBH of 72.8%, significantly better than our previous 70.1%. # GSM8K results After our initial submission, we ran the algorithms on the GSM8K dataset [1], which includes 7,473 training examples and 1,319 testing examples, using the same higher-quality pretrained model discussed above. Below is a summary of the results with bootstrapped-initialization CoT (except for direct tuning): | Direct tuning | No tune COT | STaR | SFT | SFT (with SC) | TRICE | TRICE (with SC) | |-|-|-|-|-|-|-| | 5.8 | 22.4 | 40.5 | 56.3 | 73.1 | 60.0 | 75.6 | SFT: Supervised fine-tuning. The training dataset includes example rationales for each question; we prompt-tune the model using these pairs of questions and rationales. SC: Self-consistency majority voting using 40 samples. STaR performs better than direct tuning on this task, but still underperforms compared to TRICE and SFT. On the other hand, TRICE outperforms SFT even though TRICE does not train on human-generated rationales. We conjecture that this may be because SFT with cross-entropy loss gives more weight to the style of the rationales than to their content, whereas the marginal log-likelihood loss that TRICE minimizes is insensitive to precise wording. 
* We originally disclosed a larger range of 15–100B parameters, but proprietary models in this smaller 40–75B range have been publicly disclosed by many organizations such as Anthropic, Bloomberg, DeepMind, Google, Meta, and Stability, so we feel comfortable providing this narrower range. [1] Cobbe, Karl, et al. "Training verifiers to solve math word problems." _arXiv preprint arXiv:2110.14168_ (2021).
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper proposes a prompt-tuning strategy that tries to maximize the marginal log-likelihood of generating a correct answer using CoT prompting. Strengths: The paper presents a prompt-tuning strategy that aims to optimize the marginal log-likelihood of producing accurate answers through the utilization of CoT prompting. The paper uses a simple Markov chain Monte Carlo (MCMC) expectation-maximization (EM) algorithm, memoized wake-sleep, and persistent contrastive divergence to address the challenge of sampling from the posterior. The results look promising compared with other baselines. Weaknesses: The authors did not provide specific information about the model details. If the model size is close to 100 billion, it would be beneficial for the paper to include a baseline comparison with text-davinci-003 (zero-shot or few-shot learning). Otherwise, I feel it unnecessary to conduct prompt tuning on a large-scale model if its performance is still inferior to that of zero/few-shot learning using a same-scale model from the GPT family. If the authors' intention is to compare with open-source models, it is essential to include Alpaca-CoT (https://github.com/PhoebusSi/alpaca-CoT), regardless of the specific model or fine-tuning technique they employ. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: See weakness part. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: See weakness part. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your suggestion to compare against text-davinci-003. Our model size is in the 40–75 billion parameter range, making it significantly smaller than text-davinci-003’s 175B parameters. We will add published text-davinci-003 BIG-Bench Hard (BBH) 3-shot results to Table 1 for comparison (these results are taken from [2]). The average accuracy results are summarized below: | text-davinci-003 (3-shot COT) | No tuning (3-shot COT) | TRICE (3-shot COT) | TRICE (bootstrapped COT) | |-|-|-|-| | 70.7 | 44.6 | 72.2 | 70.1 | We used the same base model to obtain results for the last 3 columns. The table above demonstrates that even though we begin with a smaller, weaker model than text-davinci-003, by doing TRICE prompt tuning, we can roughly match (if bootstrapping an initialization) or slightly improve on (if using a few-shot human-generated CoT initialization) the performance of the larger model. Also, as we discuss in the comment at the top of the page, we reran the TRICE BBH experiments using a higher-quality pretrained model, which improved our bootstrapped-initialization results from 70.1% to 72.8%. \ \ [2] “Scaling Instruction-Finetuned Language Models”, Chung et al., 2022, https://arxiv.org/pdf/2210.11416.pdf, table 19
3D molecule generation by denoising voxel grids
Accept (poster)
Summary: This paper describes a new method of unconditional small molecule generation by parameterizing small molecules as 3d voxel arrays. Strengths: - Novel characterization of the small molecule generation task and a new way to parameterize the data. This comes with benefits, chief among which is the ability to generate without a known number of atoms. - I appreciate the discussion in section 4.3, where the authors dive into some problems with the metrics that they have chosen. - The ablation studies are illuminating. - The paper is well-written and easy to follow. Weaknesses: - Unconditional generation of small molecules is a hard task to evaluate. There are no good metrics to optimize for, as the paper indicates. Nevertheless, MiDi (work that the authors cite and compare against) has a more representative set of metrics they compare against. I suggest the authors do the same. - Weaknesses of the one-step denoising procedure are not discussed. These include, for example, the inability to control generation through the use of auxiliary gradient signals, such as classifier-free guidance. - The authors don't mention the implications of having a fixed grid size for their generative model. This means that the molecules generated need to be within a specified volume, restricting the output space. They should contrast this to the fixed atom number of competing models. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - Line 243 should be as noise *decreases*, correct? - We consistently see a large drop in the validity when the noise level goes from $\sigma = 0.9$ to $\sigma = 1$, why is such a small change in noise so detrimental? What are the characteristics of the generated molecules at these levels of noise? What happens if you train a model with $\sigma = 2$? - What is the relevance of the number of aromatic rings per molecule? 
Especially given that this is one of the only metrics where the proposed model outperforms, I would like a good rationale for including this metric. - What is the relevance of the validity metric, if that is something that can be automatically detected? It seems in this case that generating unique molecules is more relevant, especially if there are no other metrics to optimize. - How can this model be extended to allow for conditional generation, given that most of the other methods being compared to have this ability? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: Mainly, the authors should find better metrics to evaluate their work, such as those mentioned in the MiDi paper (at a minimum). Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the feedback and suggestions. Below we address additional concerns from the reviewer. **Metrics:** We agree with the reviewers that unconditional molecule generation is hard to evaluate. We followed the reviewer's suggestion and evaluated VoxMol on the MiDi metrics (Table 1 and 2). VoxMol is slightly worse than EDM on the simple QM9 while performing much better than EDM on the more challenging/realistic GEOM-drugs dataset. We will update the tables and plots of the experiments to reflect the new metrics. **Limitations of one-step denoising:** Neural empirical Bayes (NEB) performs denoising in 1 step while diffusion models do so in multiple (eg, 1000) steps. Because it is a single-step denoising process (from a high noise level), it can be challenging to capture high-frequency components of the signal (eg, generated images are usually more blurred than diffusion model samples). We observed empirically that VoxMol is able to denoise voxelized molecules (they are signals with much "lower frequencies" than natural images), and therefore NEB is a good fit for this problem–as we show in experiments. We briefly mention this on L154-158, but will make it more explicit in the text. **Conditional generation:** Both NEB and diffusion models use a (learned) score function to sample with Langevin MCMC. Therefore, we can in principle also guide the sampling by constraining the score function at each step of the MCMC chain. The Langevin MCMC is used very differently in the two approaches. In DM, the chains transition from pure noise to the molecule (that is, one chain per sample). In NEB, the chain keeps sampling noisy molecules from p(y) (and we get clean molecules by denoising them with xhat). Please see the paragraph on guided/conditional sampling in the main rebuttal.
**No mention about the implications of fixed grid size:** With a fixed voxel grid, we are limited to a fixed volume in space (we mentioned this on L127, but will be clearer). With a 64^3 voxel grid (at resolution 0.25Å), we can fit over 99% of the drug-like molecules from the GEOM-drugs dataset. We wrote about the pros/cons of voxels vs point clouds (L100-117), but will be more specific about this tradeoff between the maximum-volume limitation of voxels and the fixed atom number of point-cloud models. **Drop in validity from sigma=.9 to sigma=1:** As we increase sigma, denoising gets more and more difficult to learn. At the same time, the smoother the space of noisy molecules is (ie, the higher the sigma), the easier it gets to sample from this space with Langevin MCMC. In practice, we need to find the highest sigma such that the U-Net can still learn to denoise. sigma=0.9 is the best value we found in our hyperparameter search (on the validation set). We also found this surprising; it seems that higher sigma values work on natural images (Saremi and Hyvarinen 19, Saremi and Srivastava, 22). We hypothesize this has to do with the sparsity of voxelized molecules compared to natural images, but it needs more investigation. We mention this in L174-176, but will provide a better discussion. **Measuring number of generated rings:** Hoogeboom et al., 22 point out (in their appendix) that predicting single bonds can easily increase validity. This metric was added to show that VoxMol achieves its performance without "cheating" and oversampling single bonds–in fact, it better matches the average number of rings in the ground-truth dataset. Fig 8 shows that EDM undersamples rings and oversamples single and double bonds, while VoxMol better matches the number of bonds. We recognize the rationale for this metric was not clearly explained in the text. We will move this discussion to the appendix of the paper and report the same metrics as in the MiDi paper, as suggested by the reviewers. **L243:** yes! Thanks for correcting.
**Relevance of validity metrics:** We used validity metrics as they correlate with other metrics we measured. Following the reviewer's suggestion, we will report the MiDi metrics (Tables 1 and 2 in the attached pdf) in the updated version of the paper. --- Rebuttal Comment 1.1: Title: Increased score Comment: I thank the authors for a detailed rebuttal. I believe the work is now stronger than when first presented. I have increased my score commensurate with the improved quality to a rating of accept. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the reviews and feedback. We particularly appreciate the suggestion of using the new set of metrics. We will incorporate the feedback in the updated version of the manuscript.
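To make the walk/jump structure discussed in this thread concrete, here is a 1-D toy sketch of NEB-style sampling in which the Bayes-optimal denoiser is available in closed form (our illustrative assumption, standing in for VoxMol's trained 3D U-Net; the toy distribution, step size, and names are ours):

```python
import numpy as np

sigma = 1.0  # noise level (the rebuttal above reports sigma=0.9 working best for voxels)

def xhat(y):
    # Closed-form Bayes denoiser E[x | y] for x ~ N(0, 1), y = x + sigma*eps.
    return y / (1.0 + sigma**2)

def score(y):
    # NEB/Tweedie identity: grad_y log p(y) = (xhat(y) - y) / sigma^2.
    return (xhat(y) - y) / sigma**2

rng = np.random.default_rng(0)
delta, n_steps = 0.05, 100_000
noise = rng.standard_normal(n_steps)
y, chain = 0.0, np.empty(n_steps)
for t in range(n_steps):  # "walk": Langevin MCMC on the smooth noisy density p(y)
    y = y + 0.5 * delta * score(y) + np.sqrt(delta) * noise[t]
    chain[t] = y
ys = chain[10_000:]  # discard burn-in
xs = xhat(ys)        # "jump": a single denoising step per sample
# ys has variance ~ 1 + sigma^2; xs are the denoised estimates.
```

The chain walks only over the smoothed density $p(y)$, and each clean sample is obtained with one denoising jump, in contrast to a diffusion model's long noise-to-sample trajectory (one chain per sample).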
Summary: This paper proposes VoxMol, a novel method for generating 3D molecules in the form of voxel grids. The proposed method adopts neural empirical Bayes as the basic probabilistic framework to develop generative models for 3D voxel grid representations of molecules. Experiments are conducted to show that the proposed method can successfully generate valid 3D molecules. Strengths: - This work proposes a novel method of generating 3D molecules in the form of 3D voxel grids via the neural empirical Bayes framework. The exploration of generating 3D voxel grid representations of molecules is interesting and meaningful to the development of efficient drug discovery applications. - The writing of this paper is well-organized and easy to follow. Weaknesses: - In this paper, generating 3D molecules in the form of 3D voxel grids with VoxMol is argued to be advantageous compared with 3D molecular graphs because the number of atoms need not be known beforehand, the distributions of continuous atom coordinates and discrete atom features need not be treated separately, and simpler and faster training and generation can be achieved. However, in my opinion, these are not strong advantages. (1) Actually, the number of atoms can be sampled from a learned distribution in a 3D graph generation model, and VoxMol also implicitly learns the number of atoms to place in every channel of the voxel grid. These two processes are not fundamentally different. (2) It is not a big issue to handle discrete and continuous distributions separately, as we can jointly train discrete and continuous generative models (e.g., discrete & continuous diffusion models). Also, VoxMol needs to implicitly learn a "discrete distribution", as for every 3D spatial location in the voxel grid, it needs to decide the atom type, i.e., in which channel to place an atom.
(3) The simpler and faster training and generation are mainly due to the use of the neural empirical Bayes method, which may also be applied to 3D molecular graph generation to achieve fast generation and training. Also, 3D molecular graph generation can be accelerated by other methods, such as introducing fragment-based generation. Hence, it is not convincing that the proposed VoxMol method is really advantageous when compared to existing graph-based methods. - The use of 3D voxel grids also introduces many disadvantages. 3D voxel grids lose SE(3)-invariance, as they are not invariant to rotations or translations. The use of 3D voxel grids hampers the introduction of chemical domain knowledge to guide the generative models to produce chemically valid or drug-like molecules, such as adding constraints based on fragments. The effect of these disadvantages should be evaluated or discussed. - The experimental results in Table 1 do not demonstrate the advantages of the proposed VoxMol method, as VoxMol does not outperform strong baseline methods on several metrics. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: In experiments, methods are evaluated by the average number of aromatic rings per generated molecule. Are aromatic rings detected from 2D molecular graphs inferred from the 3D structures? Are the generated aromatic rings checked in 3D space to decide whether they respect chemical constraints (i.e., all atoms in an aromatic ring should be in the same plane)? Why is this metric used? What can be shown if a method can generate more aromatic rings? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the feedback and suggestions. Below we address additional concerns from the reviewer. **Implicitly learning the number of atoms is not an advantage:** We agree with the reviewer that VoxMol implicitly learns the number of atoms (together with all other information about molecules). This might be arguable, but we assumed that learning representations more implicitly/end-to-end is simpler and can generalize better—therefore, we counted this as an advantage (R4 agrees: "This comes with benefits, chief among which is the ability to generate without a known number of atoms."). In applications related to in-painting (eg, pocket conditioning, linking, scaffold conditioning), implicitly learning the number of atoms could be very helpful. We will tone down this contribution by mentioning instead that VoxMol learns the number of atoms implicitly (in contrast to EDM-based methods, which need to learn them explicitly). **Score functions on continuous-only vs continuous/discrete space:** Because the score function is not defined on discrete spaces, it can be non-trivial to adapt score-based modeling to categorical data. This is currently an active research area (eg [E-H]) and there is still no clear winning solution to the problem. It has also been pointed out ([F]) that it can be more challenging to condition the generation of score-based models on discrete spaces. Therefore, while we agree with the reviewer that it is possible to learn a score function on a continuous-discrete space, we believe that training only on a continuous space is simpler. **Simpler training because NEB is simpler than diffusion:** We agree with the reviewer. In fact, we mention in the text that VoxMol (based on NEB) is simpler to train than _point-cloud diffusion models_ (L54-55 and L154), but this is not a general advantage of voxels over point clouds.
We also point out that NEB requires a very large noise level to work, and that denoising point clouds can be more challenging than denoising discrete grids (U-Nets work extremely well for this task and have been highly fine-tuned for it with large image generation models). **Experimental results:** The main conclusion from our experiments is that VoxMol underperforms EDM by a small margin on QM9 (Table 1) and outperforms EDM on GEOM-drugs (a larger/more realistic dataset) by a larger margin (Table 2). This conclusion holds even further with the MiDi metrics suggested by the reviewers (see Tables 1 and 2 in the attached pdf). **"3D voxel grids lose SE(3)-invariance as they are not invariant to rotations or translations."... "should be discussed"** [we assume the reviewer means "equivariance" instead of "invariance". Please let us know if that is not the case]: Our method has built-in equivariance to translation and relies on data augmentation through rotation to learn rotation equivariance (as has been done before). We mentioned in the manuscript that VoxMol (being based on CNNs) does not have rotation equivariance built into the architecture (L113-117 and L217-218). For clarity, we will include an explicit section on the limitations of the model in the revised version of the paper. **Guided generation:** Adding constraints to guide VoxMol generation can in principle be done in the same way as in-painting or guidance is done with diffusion models on images (eg, Song et al. 20, Lugmayr et al. 22). Please see the general rebuttal for a discussion of guided/conditional sampling. **Measuring the number of generated rings:** Hoogeboom et al., 22 (appendix) points out that predicting single bonds can easily increase validity. This metric was added to show that VoxMol achieves its performance without "cheating" by oversampling single bonds; in fact, it better matches the average number of rings per molecule in the ground-truth dataset.
Fig 8 in the submission shows that EDM undersamples aromatic bonds and oversamples single and double bonds, while VoxMol better matches the number of bonds. We recognize the rationale for this metric was not clearly explained in the text. We will move this discussion to the appendix of the paper and report the same metrics as in the MiDi paper, as suggested by the reviewers. [E] Argmax Flows and Multinomial Diffusion, Hoogeboom et al., NeurIPS 21 [F] Continuous diffusion for categorical data, Dieleman et al., 22 [G] Analog bits, Chen et al., ICLR 23 [H] Geometric Latent Diffusion Models for 3D Molecule Generation, Xu et al., ICML 23 --- Rebuttal Comment 1.1: Title: Follow-up Responses Comment: I appreciate the authors' hard work in the rebuttal. While some of my concerns have been addressed, several key concerns still remain unresolved for me. - I agree that EDM cannot implicitly learn the number of atoms and that implicitly learning it is useful in some conditional generation applications. But graph-based autoregressive generation methods can also achieve this by generating a "STOP" token to indicate the end of sequential generation, without specifying the number of atoms at the start. As the authors also agree that jointly learning continuous & discrete distributions is not impossible, and that the simpler training comes from NEB rather than image-based training, I suggest removing the strong claim that image-based generation is better than graph-based generation in the paper revision, and instead treating image+NEB as an independent method and discussing its unique advantages. - I agree that the number of aromatic rings is a meaningful metric. However, as the authors mentioned, aromatic rings are identified by aromatic bonds, so how are the aromatic bonds detected from the generated 3D molecular structures? Are they detected by interatomic distances?
However, even generating a 6-aromatic-bond cycle graph is not enough for an aromatic ring, as by chemistry knowledge, in a valid aromatic ring all atoms (6 carbon and 6 hydrogen atoms) should be in the same plane. Do the authors check if this constraint is satisfied? - The authors are recommended to discuss the effect of hampering fragment-based generation. Here, fragments are functional groups or subgraphs that frequently appear in molecules, such as aromatic rings or alkenes. By chemical knowledge, the 3D structures of these functional groups have many constraints, e.g., containing some unrotatable bonds that fix some atoms in the same plane. These chemical constraints can be well incorporated into graph-based generation methods, as we can first generate a 3D "super-node" graph where each super node is a functional group, then refine the 3D atomic structures in each super node (an example of this method is [1]). However, image-based methods cannot achieve this in my opinion, as images separate the atoms in a functional group into different channels of a 3D image volume. This leads to a major concern that molecules generated in the form of 3D images may consistently break chemical constraints and may not be practically synthesizable. Due to the above concerns, I tend to keep my decision of rejection. [1] Coarse-to-Fine: a Hierarchical Diffusion Model for Molecule Generation in 3D. ICML 2023. --- Reply to Comment 1.1.1: Comment: We thank R3 for the prompt reply. We provide additional discussion in the hope of better answering R3's questions. Below, we clarify the three concerns raised by the reviewer (see below for detailed clarifications): - **Point 1:** we agree with each other. As suggested, we will remove the three initial claims and focus on the model's advantages only. - **Point 2:** we agree the # arom. rings metric is not very informative (we use RDKit to compute it). We follow the reviewers' suggestion and use the MiDi metrics, as they are better for benchmarking.
Table 2 in the attached pdf shows that our model clearly beats EDM by a large margin. - **Point 3:** voxel-based models can in principle do fragment-based generation/incorporate chemical constraints (eg, [A, B, C] do this). However, the contributions of this work are (i) proposing a new model for unconditional generation and (ii) beating the SOTA on a large/challenging dataset on unconditional generation. Fragment-based generation is beyond the scope of this paper and can be seen as a future application (the same holds for structure-based or scaffold-based generation). We hope the clarifications above—together with the results on GEOM-drugs (better than EDM by a large margin)—can change the reviewer's opinion. If not, please let us know where we can be clearer. Are these three points all the concerns the reviewer has? We would be happy to address any other concern the reviewer might have. --- Detailed comments: - **Comparison with autoregressive (AR) models:** when we mentioned learning the #atoms implicitly, we were comparing VoxMol with diffusion models (the current SOTA) and not with AR models. AR models have their own advantages/disadvantages (eg, they have not been successfully applied to GEOM-drugs). - **Continuous/discrete spaces:** the authors and reviewer agree on this. - **Number of aromatic rings:** we computed the number of rings per molecule with RDKit (it is a simple “2D metric”, computed on valid molecules, like many of the other metrics used in this task) and therefore it does not check whether the atoms lie in the same plane (we observed qualitatively that they often do). We agree this is not the most meaningful metric and will not report it. Tables 1 and 2 in the attached pdf show results on the MiDi metrics, which are more useful for benchmarking models (as proposed by the reviewers). - **Fragment-based generation**: - Voxel-based models can in principle do fragment-based generation/incorporate chemical knowledge.
These constraints can be imposed either (i) during sampling (similar to in-painting, eg, initializing the chain with fragments and keeping them unchanged during sampling) or (ii) by applying fragment constraints during training. - _“However, image-based methods cannot achieve it to me, as images separate atoms in a functional group into different channels in a 3D image volume.”_: Previous works have shown evidence that this is possible (eg [A, B, C]). In fact, some chemical priors like 3D pharmacophores are particularly easy to incorporate with voxels. - This paper is about proposing a new model for unconditional generation and beating the SOTA on a large/challenging dataset. Fragment-based generation is beyond the scope of this paper. [A] DeepFrag: a deep convolutional neural network for fragment-based lead optimization, Green et al., 21 [B] Deep generative design with 3D pharmacophoric constraints, Imrie et al., 21 [C] Incorporating target-specific pharmacophoric information into deep generative models for fragment elaboration, Hadfield et al., 22
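[Editor's note] To make option (i) above concrete, here is a minimal sketch of fragment-constrained sampling on a voxel grid: the fragment's voxels are reset to their target values after every Langevin step, so the chain only explores the free region (analogous to in-painting with diffusion models). The `score` function is a toy isotropic-Gaussian stand-in for the trained denoiser's score, and all names and hyperparameters are hypothetical, not VoxMol's actual API.

```python
import numpy as np

SIGMA = 0.9  # noise level of the smoothed density p(y) (illustrative value)

def score(y):
    # Toy score of N(0, SIGMA^2 I); in VoxMol this would come from the
    # trained denoiser. Purely illustrative.
    return -y / SIGMA**2

def fragment_constrained_walk(y0, fragment, mask, n_steps=50, step=0.01, seed=0):
    """Langevin 'walk' on the noisy density, clamping fragment voxels.

    mask is a boolean grid: True where fragment voxels are kept fixed.
    """
    rng = np.random.default_rng(seed)
    y = y0.copy()
    y[mask] = fragment[mask]
    for _ in range(n_steps):
        # Unadjusted Langevin step, then re-clamp the fragment region.
        y = y + step * score(y) + np.sqrt(2 * step) * rng.standard_normal(y.shape)
        y[mask] = fragment[mask]  # keep the fragment unchanged during sampling
    return y

# Toy usage: a 5-channel 16^3 grid where part of channel 0 holds a fixed "fragment".
shape = (5, 16, 16, 16)
fragment = np.zeros(shape)
fragment[0, :4] = 1.0
mask = fragment > 0
y = fragment_constrained_walk(
    np.random.default_rng(1).standard_normal(shape), fragment, mask)
```

After the walk, the clamped voxels still hold the fragment exactly, while the rest of the grid has been explored by the chain.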
Summary: This paper proposes a novel 3D molecule generation routine dubbed VoxMol. Its highlight lies in the introduction of a connection between the traditional molecular graph representation and a 3D voxel representation. In VoxMol, molecules are first translated into a voxel representation. A denoising network is then trained to serve as a score approximation. In the sampling process, a novel walk-jump procedure is introduced to sample from the noisy distribution p(y) and obtain the denoised x. Strengths: 1. To my knowledge, the method is novel in terms of data representation and the training/sampling process. The paper showcases that, instead of the traditional graph representation, molecule generation can also be performed on 3D voxels, shedding light on new research directions connecting 3D CNNs to equivariant GNNs. The authors also tried to recover the molecular graph representation from the denoised voxels, which I really appreciate. 2. The evaluations are interesting. Though not always state-of-the-art, the authors have exhibited insightful investigations, e.g., how $\sigma$ influences the tradeoff between validity and uniqueness. 3. The overall presentation is clear and the method is very easy to follow. Weaknesses: 1. I find it a bit of a pity that equivariance is not properly injected into the model. I think it would make a much stronger impact if equivariance could be enforced even on this 3D voxel representation, so that physical/chemical priors could be ensured. 2. The method does not significantly beat EDM on these benchmarks. But I understand that, as perhaps the first work to tackle 3D molecule generation with 3D CNNs, it is difficult to beat all the GNN counterparts at once. Technical Quality: 3 good Clarity: 3 good Questions for Authors: As the authors have stated in the paper, efforts have been made toward equivariant 3D CNNs but were unsuccessful. I am curious what detailed experiments were conducted and what the results were.
I believe that by incorporating equivariance the work will make a stronger contribution. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The limitations are not discussed in the main text. I would recommend the authors add these discussions, since I think the work is inspiring and future research will benefit from them. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the feedback and suggestions. Below we address additional concerns from the reviewer. **The method does not significantly beat EDM on these benchmarks:** Tables 1 and 2 in the attached pdf compare our method with EDM using the MiDi metrics (as suggested by the reviewers). Although it underperforms EDM on QM9 (by a relatively small margin), VoxMol clearly beats EDM by a large margin on most metrics on the challenging (and more realistic!) GEOM-drugs dataset. These conclusions were the same with the original metrics, but the new metrics show the differences in performance more clearly. We will update the tables and plots of the experiments to reflect the new metrics. **Limitations:** We compare the pros/cons of voxel/CNN vs point-cloud/GNN in the last paragraph of related works (L100-117). We also mention the limitation that the method scales cubically with the voxel grid dimension (although it is constant time given a fixed grid dimension) at L257-259. We will make these limitations clearer by adding an explicit "Limitations" paragraph at the end of the manuscript (also updated with the reviewers' remarks). **SE(3) equivariance:** Please see the SE(3) equivariance paragraph in the general rebuttal. **SE(3)-equivariant 3D CNN experiments:** We start with the official implementation of [D] and tune several network hyperparameters (related to architecture, optimization and training) so that the network is able to achieve good denoising metrics on QM9. We then use the same procedure to generate samples as described in the paper (only switching the network from the non-equivariant to the equivariant version). We tried different sampling hyperparameters, but we were never able to achieve the same performance as the non-equivariant VoxMol (eg, molecule stability (with the MiDi metrics) dropped from 0.87 to 0.25).
There may be several reasons for this: (i) the best reconstruction loss we found with the equivariant model is higher than with the non-equivariant one (~9.4e-5 vs. 5.4e-5 MSE on the val. set), (ii) the equivariant model needs more capacity to be competitive with the non-equivariant one (it currently has over 90x fewer parameters), (iii) something in the sampling procedure needs to be different in the equivariant version (unlikely). What makes it even more challenging is that the equivariant network is less efficient (around 50-60% slower) and we faced issues when scaling up to a 64^3 grid size (required for GEOM-drugs). We will include a description of what we tried in this space in the appendix. [D] An end-to-end SE(3)-equivariant segmentation network, Diaz, Geiger, McKinley, 23 --- Rebuttal Comment 1.1: Title: Response Comment: I thank the authors for the detailed feedback. Though there is still room for improvement in this paper, I retain my positive score since the insights are interesting and the approach offers satisfactory novelty. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the reviews and feedback. We will update the manuscript to make those points clearer. We particularly appreciate that the reviewer agrees that a novel/different method with competitive results _is_ a good contribution to this community.
Summary: This paper proposes doing diffusion on voxel space for molecule conformation generation. Because the voxel space is discrete, an efficient sampling method is proposed. Strengths: This paper proposes doing diffusion on voxel space for molecule conformation generation. Because the voxel space is discrete, an efficient sampling method is proposed. Weaknesses: ## Method: - It is not clear how the SE(3)-equivariant property is guaranteed in the model. This is critical because once molecules are too large to fit in the voxel grid, we need to rotate them into a better position. - The objective function in Eq 2 is not clear. Concretely, it is not clear how the atom types are trained. ## Experiments: - The authors also referred to MiDi in the paper. I'm wondering why the authors did not add MiDi? Is that because the performance of MiDi is better than VoxMol? - The MiDi paper introduces more domain-specific metrics, and the authors should consider adding them. - Also, a closely related baseline should be cited at least: https://arxiv.org/abs/2305.05708. ## Minor: - `... capturing long-range dependencies over multiple atoms can become difficult …` It also depends on how the edges are constructed. For point clouds, the edges are often constructed by distance, so this may not become an issue as long as the spatial distance is close. Technical Quality: 3 good Clarity: 3 good Questions for Authors: I have some questions on the method section, and would like to confirm with the authors. - I am wondering what is the advantage of using NEB for estimating p(x)? Based on lines 135-140, it seems to be a standard EBM with Langevin dynamics, plus an extra NEB module. Is this correct? - The authors mentioned that `Chains are initialized with Gaussian noise with standard deviation`. I would like to double-check: what does standard Gaussian mean in the voxel grids? Confidence: 5: You are absolutely certain about your assessment.
You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the feedback and suggestions. Below we address additional concerns from the reviewer. **Diffusion model:** We would like to clarify that the proposed model is _not_ a diffusion model (this is a big point of the paper). Our model is based on neural empirical Bayes and walk-jump sampling (Saremi and Hyvarinen, 19). Although the model is also a generative model based on score functions, it is fundamentally different from diffusion models. **Advantage of using NEB for estimating p(x):** In general, it is very difficult to use the score function (the gradient of the log-likelihood) to sample directly from p(x) (the space of "clean" molecules). The intuition of NEB is that sampling noisy molecules y from p(y) is much easier than sampling the highly sparse x from p(x). In NEB, the score function is used to sample noisy molecules y. A single chain of VoxMol sampling can generate multiple noisy molecules (with a fast mixing time in practice). We can then "jump" from noisy to clean molecules with the denoiser network. This is in contrast to diffusion models, where the noise is gradually reduced in the reverse diffusion process (thus one chain per sample). **Comparison w. arXiv2305.05708:** We note this paper appeared 6 days before the submission deadline. We will add this reference to the revised version of the manuscript. What we like about this paper is that—similar to our work—it shows that we can generate molecules with methods that differ from GNNs on point clouds. They also show—like we did—that a built-in SE(3)-equivariant architecture is not necessary to generate molecules; equivariance can be learned with data augmentation, as argued in [A, B, C] and related works. **SE(3) equivariance:** VoxMol has built-in equivariance to translation and relies on data augmentation through rotation (as has been done before) to learn rotation equivariance.
Unlike arXiv2305.05708, 3D CNNs can in principle be adapted to have built-in rotation equivariance (Weiler et al., 18). Please see the SE(3) equivariance paragraph in the general rebuttal. **"…capturing long-range dependencies over multiple atoms can become difficult…":** We agree that it depends on how the edges are constructed. However, we point out that EDM considers interactions between all atoms and uses a fully-connected graph (paragraph after equation 11 in the EDM paper). We will clarify that this is a limitation of EDM in particular and not of GNNs in general. **Eq 2 is not clear. Concretely, it is not clear how the atom types are trained:** Each atom type is represented by a different input channel (similar to how the R, G and B colors are separated in images). Training is done by denoising every voxel location of every channel of the noisy voxel grids (similar to how image denoisers are applied to every pixel of the three color channels). **Chain initialization:** We describe how we initialize the chain in Appendix A3: "We follow [27] and initialize the chains by adding uniform noise to the initial Gaussian noise (with the same σ used during training), i.e., y0 = N(0, σ^2 I_d) + U_d(0, 1) (this was observed to mix faster in practice)". y0 is a voxel grid of tensor dimensions `(n_channels, dim, dim, dim)`, where `dim` is the dimension of the voxel grid (32 on QM9 and 64 on GEOM-drugs) and `n_channels` is the number of atom channels considered. We will move this to the main text for clarity. [A] Understanding image representations by measuring their equivariance and equivalence, Lenc, Vedaldi, CVPR 15 [B] Grounding inductive biases in natural images: invariance stems from variations in data, Bouchacourt et al., NeurIPS 21 [C] The Lie Derivative for Measuring Learned Equivariance, Gruver et al., 22 --- Rebuttal Comment 1.1: Title: Follow-ups Comment: Thank you for the replies and the correction on the diffusion model. Some minor issues have been addressed.
However, my question remains unresolved, and more critical questions arise. Thus, I would like to keep my score. 1. My main concern is the SE(3)-equivariance. - Currently, there is no evaluation metric for SE(3)-equivariance in the result tables, especially rotation. Thus, the `outperforming performance` is debatable. - You can think of the main baseline, EDM, as a constrained generation model, where the generation process needs to satisfy SE(3)-equivariance, while this work is more like unconstrained generation. - For data augmentation specifically, we can always use rotation as data augmentation. However, as raised in my previous review, what if certain molecules are too large to fit in the voxel grid? How can we guarantee that we can always rotate them to fit? What about molecules that can never fit? Will you do truncation? If so, how will this affect the generation? These questions are critical for data augmentation. - The related works [A,B,C] conduct experiments on images, which I am unfamiliar with. Is there any work providing similar explorations/observations on the chemistry side? - Thus, the claim that `data augmentation is sufficient` is not well supported. If you want to prove that generation with data augmentation is sufficient for SE(3)-equivariance, you need to design a metric specific to group symmetry and evaluate it for all the methods. 2. Thank you for adding the MiDi results; MiDi is an SE(3)-equivariant method. There are 18 metrics in total, and MiDi performs better than VoxMol on 15 of them. - As mentioned, MiDi is an SE(3)-equivariant model, and I'm not sure how the authors can do `MiDi+VoxMol`? Or do you simply mean doing 2D+3D using VoxMol? Then how do you define the covalent bonds in the voxel space? Do they have volumes? If so, how large? Typically, covalent bonds do not exist in 3D space, and it is non-trivial to inject them into the voxel space. 3.
Yes, as mentioned previously, adding the citation to that paper is sufficient. No need to compare. --- Reply to Comment 1.1.1: Comment: We thank R1 for the reply. We provide additional discussion in the hope of better answering R1's questions (detailed clarifications are posted below). 1. **Built-in SE(3)-equivariance (SE3E):** - Voxel representations have been extensively used in molecular modeling (generative and discriminative). These types of models don't have built-in SE3E but are still useful to the community. - The point of the paper is not to have a built-in SE3E model, but to show that we achieve competitive empirical results without it (as also shown by arXiv2305.05708). - Imposing SE3E on top of VoxMol could improve results even further. Future work could include: (i) designing 3D CNN architectures with built-in SE(3) equivariance, (ii) connecting 3D CNNs to SE3E GNNs (as proposed by R2 and in [AA, BB]). 2. **Comparison with MiDi:** - MiDi is a "2D+3D model" built on top of a "3D model" (EDM). It shows that 3D models can be augmented with connectivity graphs to improve performance. In principle, this should work for other generative models as well. - MiDi leverages more information (2D graphs, formal charges) than VoxMol/EDM during training, so quantitative apples-to-apples comparisons are not fair. - The contributions of this work are (i) proposing a novel model for unconditional generation and (ii) beating the SOTA on a challenging dataset, assuming the same data is used for training. Leveraging 2D information (on top of 3D atoms) is beyond the scope of this paper. We hope the clarifications above are clear. Does the reviewer agree with the authors that (i) there is value in showing that a novel voxel-based model can be competitive with the SOTA despite not having built-in SE(3)-equivariance, and (ii) comparisons between VoxMol and MiDi are not apples-to-apples since the models are trained on different data? Please let us know if any other clarification is necessary.
[AA] Point-Voxel CNN for Efficient 3D Deep Learning, Liu et al., NeurIPS 19 [BB] 3D Shape Generation and Completion through Point-Voxel Diffusion, Zhou et al., ICCV 21 --- **Detailed clarifications:** 1. **Built-in SE(3)-equivariance:** - `outperforming performance`: By "outperforming on GEOM-drugs" we mean that VoxMol has better empirical results than EDM on generative modeling metrics (the MiDi metrics, as asked by R1). These numbers do not measure any "SE3E property", but they clearly show that VoxMol better matches the distribution of GEOM-drugs and generates more realistic molecules. - _What if certain molecules are too large to fit in the voxel grids?_ - With a fixed grid size, voxel representations are limited to a fixed volume in space (L127). We can only use molecules that fit the grid for training, and the model can only generate molecules that fit the grid. - With a 64^3 voxel grid (at resolution .25A), we can fit over 99% of the drug-like molecules from the GEOM-drugs dataset (L207-209), and train with 1 GPU. Rotating molecules by any rotation does not change that. - Every model has limitations. In the case of EDM, the model sampling time scales with n^2, where n is the number of atoms (since it uses fully-connected graphs). Voxels, on the other hand, have constant inference time wrt n (for a fixed grid size) (L257-259). The tradeoff between EDM/VoxMol wrt voxel grid length and the number of atoms has been discussed in the main text. We will make it clearer in the revised version. - _Related works [A,B,C]_: this is a very new and active research direction and, as far as we know, these approaches have not been tested on molecular data yet. This work, in addition to, eg, arXiv2305.05708, shows empirically that there might be something interesting to learn about this on molecular data. - `data augmentation is sufficient`: We agree with R1; we did not claim it is sufficient, nor did we want to prove it.
We pointed to a few references [A,B,C] showing that equivariance can be learned in neural networks (eg, vision transformers are SOTA on images without built-in translation or rotation equivariance; they learn it). We agree it is future work to see how data augmentation helps learn rotation equivariance. 2. **Comparisons with MiDi:** - By `MiDi+VoxMol` we meant methods that could leverage 2D information on top of VoxMol, similar to how MiDi leverages 2D information on top of EDM. Eg, the model could use a GNN for modeling connectivity graphs (2D) on top of VoxMol (3D) and train them together/separately. It is an open question how this can be done efficiently. - MiDi leverages more information than VoxMol/EDM during training, so a quantitative apples-to-apples comparison is not fair. - Leveraging 2D information (on top of 3D atoms) is beyond the scope of this paper. - Finally, the results R1 is comparing against (MiDi-adapt) appeared _after_ the conference submission deadline and are (still) unpublished.
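[Editor's note] For readers following this thread, the walk-jump procedure discussed in the rebuttal can be sketched in a few lines: Langevin "walk" steps on the smoothed density p(y), chains initialized as y0 = N(0, σ²I) + U(0, 1) as quoted above, then a single denoising "jump" via the empirical-Bayes (Tweedie) formula xhat(y) = y + σ² score(y). The score below is a toy closed-form stand-in for the trained 3D U-Net (so the jump collapses to the toy prior's mean of zero); names and hyperparameters are hypothetical.

```python
import numpy as np

SIGMA = 0.9  # training noise level (illustrative value)

def score(y):
    # Toy score of N(0, SIGMA^2 I), standing in for the learned denoiser's
    # score; NOT the trained network.
    return -y / SIGMA**2

def jump(y):
    # Tweedie / empirical-Bayes denoising: xhat(y) = y + sigma^2 * score(y).
    return y + SIGMA**2 * score(y)

def walk_jump_sample(shape, n_steps=100, step=0.01, seed=0):
    rng = np.random.default_rng(seed)
    # Chain init as in the rebuttal: Gaussian noise plus uniform noise.
    y = SIGMA * rng.standard_normal(shape) + rng.uniform(0.0, 1.0, size=shape)
    for _ in range(n_steps):
        # Walk: unadjusted Langevin step on the smoother density p(y).
        y = y + step * score(y) + np.sqrt(2 * step) * rng.standard_normal(shape)
    # Jump: a single forward pass of the denoiser yields the clean sample.
    return jump(y)

# Voxel grids have shape (n_channels, dim, dim, dim); dim=32 is used for QM9.
x_hat = walk_jump_sample((5, 32, 32, 32))
```

With a learned score, one long "walk" chain yields many noisy samples y, and each can be "jumped" to a clean grid with one forward pass; this is the contrast with diffusion models drawn in the rebuttal.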
Rebuttal 1: Rebuttal: (R IDs: R1=R82i, R2=j1qM, R3=bpDM, R4=zHHZ) We thank the reviewers for the detailed and helpful reviews. In particular, we thank them for the suggestion of evaluating on the MiDi metrics [R1, R4]. The results on these metrics corroborate the findings of the submission: VoxMol performs slightly worse than EDM on QM9 (Table 1 in the attached pdf), while considerably outperforming EDM on the harder GEOM-drugs (Table 2). The metrics show more clearly that VoxMol (i) beats the current state of the art on an (arguably) more realistic/useful dataset _by a large margin_, and (ii) seems to be more scalable. We also acknowledge that reviewers find that this work (iii) is novel in terms of data representation, training and/or sampling [R1, R2, R3, R4], (iv) has good evaluations/ablations [R2, R4], (v) is well written [R2, R3, R4], and (vi) can "shed light on new research directions of connecting 3D CNNs to equivariant GNNs" [R2]. PS: we fixed a bug in the rotation augmentation (it randomly rotated the axis by an angle in [0, 2] instead of [0, 2pi] radians), see the `_random_rot_matrix()` function in `./dataset/dataset.py` (submitted source code). The bug fix improves the uniqueness score while the other metrics mostly remain the same. Tables 1 and 2 show results with VoxMol trained with the correct rotation augmentation. Next, we address the main concerns from the reviewers. **MiDi metrics [R1, R4]:** First, we ran the official MiDi evaluation on EDM's .xyz files (provided by the authors) to reproduce the results from the MiDi paper. Then, we ran the same evaluation on VoxMol's .xyz files. Tables 1 and 2 in the attached pdf show results for QM9 and GEOM-drugs respectively. To summarize: * On QM9, VoxMol performs slightly worse than/similar to EDM on most metrics (except on bond angle W1, where the gap is large). * On GEOM, VoxMol is considerably better than EDM (by larger margins in most cases) on all metrics except bond length W1, where they perform relatively close.
We will use these tables/metrics in the revised version of the manuscript. **Comparison w. MiDi [R1]:** MiDi is built on top of EDM by leveraging molecular graphs (2D) as well as 3D atoms. Its contributions are therefore orthogonal to ours. In fact, a "VoxMol+MiDi" model could improve results the same way MiDi (3D+2D) improved over EDM (3D). Tables 1 and 2 show MiDi results for reference. We note that VoxMol outperforms MiDi (published at the ICLR 23 MLDD Workshop) on GEOM-drugs on most metrics _without_ using molecular graphs or formal charge information during training. **SE(3) equivariance [R1, R2, R3]:** Built-in rotation equivariance is a good property for a network to have; however, equivariance can also be learned with strong data augmentation/larger datasets ([A, B, C] and related works). By choosing a 3D U-Net architecture, we lose built-in rotation equivariance but gain in the expressiveness of the denoising network (U-Nets have been highly optimized for denoising grids, eg, in modern image generation). Our experiments show that an efficient denoiser can scale up better, allowing VoxMol to outperform the current SOTA on GEOM-drugs despite not having built-in rotation equivariance. 3D CNNs can in theory be adapted to have built-in rotation equivariance (Weiler et al., 18). Successfully designing SE(3)-equivariant 3D CNNs for this problem could potentially improve results further. Currently, equivariant 3D CNNs do not scale as well as standard ones (and it is a challenge to apply them to large datasets). We hope our results motivate the community to explore more scalable SE(3)-equivariant 3D CNNs. We see these developments as architecture improvements and not as the main contributions of the paper. **Guided/Conditional generation [R3, R4]:** Like diffusion models (DMs), our method also leverages (learned) score functions and relies on Langevin MCMC for sampling.
Therefore, in theory we can condition VoxMol similarly to how it is done in diffusion models: by constraining the score function as we walk through the MCMC chain. In the case of DMs, the score function at all T steps is constrained to guide the transition steps from noise to a (conditioned) sample. In VoxMol, the constrained score function would affect the "walk steps" (the Langevin MCMC steps): it would restrict the region where the chain samples noisy molecules y to P(y|c) (instead of P(y)), where c is the condition (e.g., the gradient of a classifier). The "jump step" (a forward pass of the denoising network over the noised molecules) is independent of the condition and remains unchanged. Many of the innovations in conditioning DMs come from computer vision, where U-Nets are usually used. Since VoxMol has the same architecture (albeit 3D instead of 2D), many of the conditioning techniques/tricks used for images may be more easily transferable. E.g., we could in principle use the gradient of a classifier (trained jointly) to guide the sampling (using the same trick as in Dhariwal and Nichol, 21) or adapt classifier-free guidance (Ho and Salimans, 22). Pocket conditioning could also be possible, as in, e.g., Schneuing et al., 22 and Guan et al., 23. In-painting (related to linker/scaffold/fragment conditioning) has also proven to work very well with 2D U-Nets, so it could potentially work with 3D U-Nets as well. We will add a section in the appendix on how we can condition the generation in different ways. We agree that conditional molecule generation is a more interesting problem in practice. We note that our approach is the first of its kind, and the main focus of this submission is to show that (i) it is a feasible approach (this is non-trivial) and (ii) it scales well on unconditional generation, beating the current SOTA on a large dataset. 
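To make the guidance idea concrete, here is a minimal sketch of one classifier-guided "walk step"; `score_fn` and `cond_grad_fn` are hypothetical stand-ins for the learned noisy-data score and the classifier gradient, not functions from the submitted code:

```python
import numpy as np

def guided_walk_step(y, score_fn, cond_grad_fn, step_size=1e-2, rng=None):
    """One classifier-guided Langevin MCMC 'walk step' on noisy voxels y.
    The unconditional score (grad_y log p(y)) is shifted by the condition
    gradient (grad_y log p(c|y)), so the chain samples from p(y|c)
    instead of p(y). The 'jump step' (denoiser pass) is unchanged."""
    rng = np.random.default_rng() if rng is None else rng
    guided_score = score_fn(y) + cond_grad_fn(y)
    noise = rng.standard_normal(y.shape)
    return y + step_size * guided_score + np.sqrt(2.0 * step_size) * noise
```

Setting `cond_grad_fn` to zero recovers the unconditional walk step, which is the sense in which guidance is a drop-in modification.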
[A] Understanding image representations by measuring their equivariance and equivalence, Lenc and Vedaldi, CVPR 15. [B] Grounding inductive biases in natural images: invariance stems from variations in data, Bouchacourt et al., NeurIPS 21. [C] The Lie Derivative for Measuring Learned Equivariance, Gruver et al., 22. Pdf: /pdf/7c21c2a6c59132920e3744519a7c9242ff5fe514.pdf
NeurIPS_2023_submissions_huggingface
2023
Rehearsal Learning for Avoiding Undesired Future
Accept (poster)
Summary: The paper proposes the framework of Structural Rehearsal Models (SRMs), which is similar to Structural Causal Models but based not on causal relations but instead on rehearsation [1]. In this kind of relation, edges can be bi-directional and indicate that some values influence each other. SRMs are then used to tackle the problem of Avoiding Undesired Future (AUF), where the agent wants to manipulate some intermediate variables Z in order to keep the value of the final variable Y within the desired range; the problem is formulated using SRMs. Then, an algorithm is proposed for the case with linear dependencies and one-node interventions. Experiments in toy environments are conducted, showing that the proposed algorithm outperforms off-the-shelf RL algorithms. [1] https://link.springer.com/article/10.1007/s11704-022-2900-0 Strengths: [S1] The ideas presented in the paper are novel and seem to be well motivated by the need for a less strict variant of “causal”-based models, where we can still leverage the structure without requiring strict causality. [S2] The paper is nicely written and well structured, and mostly easy to follow. Weaknesses: [W1] I think having a concrete case where you show that your algorithm works well and causality-based methods either cannot be used or work worse is crucial. I am not 100% convinced by the experiments in the paper. Can causality-based methods be used there (if needed, by lumping together the variables from one clique)? If so, some causality-based methods should be included in the comparison. If not, please provide a compelling explanation why this is not feasible. [W2] I was not able to understand how the computation graph changes when an intervention changes the size of some clique; I present more details in [Q1]. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: [Q1] What happens if you have some clique, e.g. with 3 elements, and then one of the elements is used in the intervention? 
What will be the function to produce values for the remaining 2 elements, and how does it relate to the un-intervened function? Is it derived from the base one, or are there completely new parameters to learn here? In the latter case, there would be 2^c possible parameter sets for each clique of size c, right? [Q2] Section 4.2 - can you please explain how you derive the initial set of graphs G? [Q3] You mention in the paper that your framework enables non-stationary graphs etc. But this is never actually used, and the experiments are for the simpler case. Moreover, it seems that this time-dependence could also be introduced for normal SCMs, and it is actually orthogonal to the question of whether you use SCMs or SRMs. Can you maybe elaborate more on that aspect and argue whether it is the case that this non-stationarity is more natural in the case of SRMs? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: I suggest that the authors include a subsection or a paragraph dedicated to listing the limitations, e.g.: focusing on the linear case, toy environments, etc. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your insightful comments! Below we address your questions in a point-by-point fashion. > W1. Regarding the experiments and if causality-based methods can be used (if needed, by lumping together the variables from one clique). Thanks for the insightful questions! We want to emphasize that the primary goal of the experiments was not to demonstrate that our method outperforms specific baselines. Instead, as the first attempt to explore rehearsal learning, we aimed to illustrate through the experiments that building decision-making models based on rehearsal relations (rather than relying on correlation or causation) is feasible and sensible, at least for certain problems. The experiments clearly achieved this objective. Regarding the applicability of causality-based methods by lumping together the variables from one clique, we are afraid that it is not a suitable approach. While lumping together variables might seem like an intuitive way to incorporate causality, it results in coarse modeling granularity, making it challenging to find appropriate decisions. For example, if we lump three interrelated variables $V_1,V_2,V_3$ together as a single node $W$ in a causal graph, then the atomic intervention would be a joint intervention on $W=(V_1,V_2,V_3)$ simultaneously. The joint intervention on the three variables can be infeasible or unnecessarily costly for some decision tasks. Furthermore, in this lumping approach, we cannot model size-1 interventions in the "causal" graph by selectively intervening on only one variable, let's say $V_1$, while keeping the other two, $V_2$ and $V_3$, untouched. The reason is that intervening on $V_1$ would result in changes in $V_2$ and $V_3$, given their interrelated nature. In less formal terms, that means performing $do(W=w)$, which actually intervenes on an individual component, could lead to $W\ne w$, which does not make sense in causality. 
Consequently, this approach is not well-suited for capturing the intricate relationships among interrelated variables. On the other hand, using an SRM to model these variables is more appropriate, enabling us to account for fine-grained interventions and accurately capture their effects on the system. > W2 and Q1. Regarding the change of the graph when the intervention changes the size of some clique. Consider an example with a clique of size three. When intervening on one variable, the generating mechanism for the other two variables uses another set of parameters, which is generally different from those used in the un-intervened case and is not derived from the base one if no further assumptions are made. Hence, in the most general case, there could be $2^c$ possible parameter sets for each clique of size $c$, as you correctly pointed out. In practical implementations, it is reasonable to limit the size of a clique and keep the number of parameter sets at an acceptable level. Moreover, it is unlikely that we would need to or be able to intervene on every possible subset of a clique. We can omit the parameters corresponding to those generating mechanisms that would not be used, thus reducing unnecessary computational burden. Also, we can impose certain assumptions on the relations among the parameters. For instance, it is possible to marginalize the parameters of the base case and use them as priors for the other sets of parameters, leading to more efficient modeling. > Q2. Regarding the generation of the initial set of graphs. The generation of the initial set of graphs $\mathcal{G}$ is given in Section 4.1, where we adopt a bootstrap-based method. Specifically, we obtain a candidate graph $G$ by sampling from the observed data with replacement and applying a graph learning method to the re-sampled data, such as the preliminary learning method introduced in Appendix B. $\mathcal{G}$ is constructed by repeating this procedure $|\mathcal{G}|$ times. 
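As a minimal sketch of the bootstrap procedure just described (here `learn_graph` is a hypothetical stand-in for the preliminary structure-learning method of Appendix B):

```python
import numpy as np

def initial_graph_set(data, learn_graph, n_graphs, rng=None):
    """Bootstrap construction of the prior graph set G: resample the
    observations with replacement, fit a candidate graph on each
    resample, and give every graph equal prior weight 1/|G|."""
    rng = np.random.default_rng() if rng is None else rng
    n = data.shape[0]
    graphs = []
    for _ in range(n_graphs):
        idx = rng.integers(0, n, size=n)  # bootstrap resample of rows
        graphs.append(learn_graph(data[idx]))
    weights = np.full(n_graphs, 1.0 / n_graphs)  # uniform prior
    return graphs, weights
```

The uniform `weights` then serve as the prior that gets updated into a posterior as examples arrive in each round.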
Each graph $G \in \mathcal{G}$ is given equal weight to serve as the prior distribution over graphs. Subsequently, as we receive examples in each round, we update the posterior distribution of these graphs accordingly. We can then sample graphs from this posterior distribution in the following rounds. > Q3. Regarding the time-dependence of SRM. This paper primarily focuses on demonstrating the feasibility of using rehearsal to enhance decision-making in a basic setting. Non-stationary environment modeling is a more general and practical consideration but is also a much more complicated topic that requires further research and investigation. Intuitively, it is possible to model the environment at each time $t$ with a different causal graph and describe it with multiple SCMs, but the temporal nature is not inherent in SCMs. In addition, current temporal causal modeling generally assumes that the causal structure in a dynamic system simply repeats over time, which is different from the situation we considered, where the relations among variables can evolve over time. As for SRMs, since they are designed for modeling decision-making tasks, where time is an indispensable component, it is natural for them to have intrinsic time dependence. Moreover, an interesting observation that may warrant further exploration in future studies is that the interrelated variables in SRMs can provide hints about the possible evolution of variable relations. In an evolving dynamic environment, the change from $A\rightarrow B$ to $A\leftarrow B$ may undergo an intermediate stage where the relation is $A\leftrightarrow B$, which is not considered in causal modeling. From this perspective, SRMs seem to be a more natural choice for describing such evolutions and incorporating structural knowledge into decision-making tasks. Thank you again for your insightful feedback! We hope that the explanations address your concerns appropriately. 
We will also incorporate the above discussions into the revised version of the paper. --- Rebuttal Comment 1.1: Comment: Thank you for addressing my questions. I have decided to raise my score to 6.
Summary: This work presents a rehearsal learning framework for avoiding undesired futures. The framework is characterized by a probabilistic graphical model called rehearsal graphs together with structural equations; actionable decisions that enable the outcome to be altered are found under a Bayesian framework, and a correct bound to quantify the associated risk is offered. Strengths: 1. The problem that the paper is trying to solve is important: how to exploit the data-generating mechanism to improve decision-making efficiency is of great importance but not yet fully addressed. 2. The paper is generally well written and clearly motivated. Weaknesses: I think the biggest problem of this paper is the difference between the Structural Rehearsal Model (SRM) and the Structural Causal Model (SCM). The authors claimed that SCMs cannot address dynamic problems with time involved, which is not true. There is a large body of existing work investigating causal structure learning / inference for dynamic systems, and it can be leveraged for temporal decision making. The most widely used model is the Granger causality model and its variants. I don't see an essential difference between SRM and SCM if the only difference is the so-called dynamic issue claimed in the paper. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. I think a basic computational cost should be evaluated for the proposed algorithm. The experiments should add more comparisons between the proposed method and existing methods. 2. Could the authors please include temporal causal models as baselines? I would also suggest that the authors compare the proposed method with causal bandit methods. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: see the weakness and question sections. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your valuable comments! Below we address your questions in a point-by-point fashion. > W1. Regarding the difference between the Structural Rehearsal Model (SRM) and the Structural Causal Model (SCM), and the dynamic issue in decision problems. Thanks for raising the question! We are sorry for the possible misunderstanding raised by the word "dynamic". We want to clarify the distinction between the "dynamic" setting considered in the causal discovery and causal inference literature and the "dynamic" setting we refer to in this work. In the causal discovery and inference literature, the "dynamic" setting typically involves dynamic systems that expand over time. Related methods commonly assume that the causal structures among variables at consecutive time fragments remain unchanged and simply repeat over time, so that valid causal discovery and inference on time-series data are possible. However, in our work, when we mention the "dynamic" and time-dependent nature of decision-making, we mean that the relations among the same group of variables can evolve (or change) across different time segments. For example, offering a discount may attract more customers initially, but as time evolves and an economic recession occurs, the relations can change: offering the same discount may be useless. This evolving aspect is not considered in previous work on causality. We will modify the description in the revised version of the paper to make it more comprehensive. Furthermore, in addition to the "evolving" nature of SRMs, the fundamental difference between SRMs and SCMs lies in the distinct relations they describe, namely rehearsation and causation, for which we gave a detailed motivation and explanation in Section 2 and Appendix B. Compared to an SCM, an SRM additionally considers variables that are interrelated: altering any one of them would cause the other variables to change, which can be inappropriate or inconvenient to model with causal relations. 
Moreover, we want to emphasize that this work is the first exploration into rehearsation and rehearsal learning. This study has already successfully demonstrated the possibility of building decision-making frameworks based on rehearsation, and revealed the potential advantage over other methods for problems with extremely sparse interactions. We believe that this work could open up exciting avenues for meaningful future investigations into rehearsal modeling-based decision-making. > Q1. Regarding computational costs and comparison with more baselines. The average running time of the proposed method on two datasets for $T=100$ rounds is 228.4 and 332.6 seconds, respectively. As the running time can vary a lot with different learning components used in the framework, the actual running time may not be very informative. Hence, we give a simplified analysis of the running time complexity in the response to Q1 raised by Reviewer LZtj. The overall time complexity is about $O(|\mathcal{G}|\text{poly}(d)+T(B d^2 + n' |\mathcal{G}| + d n \log n))$, where $d$ denotes the number of variables and $n$ and $n'$ are respectively the sampling sizes in the candidate action finding step and the Bayesian update step. The time complexity reveals that the proposed method scales polynomially with the number of variables. As for comparisons with more baselines, we conducted new experiments with the two settings in Section 5 using classic reinforcement learning methods SAC and PPO. The average success probabilities are presented in the following table: | | SAC | PPO | DDPG | CATS | Ours| | --- | --- | --- | --- | --- | --- | |Ride-hailing | .177 | .154 | .173 | .104 | .714 | |Bermuda | .205 | .190 | .230 | .215 | .679 | As observed, baselines do not achieve desirable performance, which is probably due to the scarcity of interactions in the AUF problem. However, we want to emphasize that the main purpose of the experiments was not to demonstrate that our method outperforms other baselines. 
Instead, as the first exploration, we want to show with the experiments that it is possible and sensible to build decision-making on rehearsal relations (instead of correlation and causation), at least for some problems. We hope this work could inspire future investigations into rehearsal-based decision-making. > Q2. Regarding including temporal causal models and causal bandits as baselines. Thank you for the suggestion! Solving AUF with causal methods requires the method to be capable of optimizing the probability of belonging to a desired region with unknown causal structures. Unfortunately, we explored various temporal causal methods, but none of them can be applied. Regarding causal bandits, most studies assume a known causal graph [1,2,3], which is not available in our setting. We came across a very recent work [4] that does not require knowledge of the causal graph, and we conducted new experiments using this method. The average success probabilities obtained were far from satisfactory, achieving 0.073 and 0.066 on the two datasets, respectively. We believe that the poor performance is mainly attributed to the fact that causal bandits aim to find a universally good action (arm) with an optimal expected reward, while in the AUF problem, the optimal action depends on the observed $X$, and there is no single action that performs well across all circumstances. Thank you again for your valuable feedback! We hope that the explanations address your concerns appropriately. We will also incorporate some of the above discussions into the revised version of the paper. [1] Lattimore et al. Causal Bandits: Learning Good Interventions via Causal Inference. NeurIPS, 2016. [2] Lee and Bareinboim. Structural Causal Bandits: Where to Intervene? NeurIPS, 2018. [3] Lee and Bareinboim. Structural Causal Bandits with Non-manipulable Variables. AAAI, 2019. [4] Malek et al. Additive Causal Bandits with Unknown Graph. ICML. 2023. 
--- Rebuttal Comment 1.1: Comment: Thank you very much for the clarification. I am willing to increase my score to 5 as authors' clarification partially addresses my concerns.
Summary: The authors present a formulation called the rehearsal learning framework to study problems where reasoning about undesirable future outcomes can be leveraged to avoid those undesirable futures---a kind of forward-looking counterfactual reasoning. The authors additionally show how decisions can be made within this framework using Bayesian methods, and prove some PAC bounds. Strengths: Originality: This work represents a fairly novel formulation of a decision problem, building upon prior work in probabilistic graphical models and causal modeling. Quality/clarity: This work is fairly clear, though it is also fairly dense. Significance: This particular formulation seems important, and is underexplored relative to the more traditional scenarios. I suspect this method will be an influential launch-point for future work on rehearsal learning. Weaknesses: * The authors could probably spend more time comparing their method to other classical decision problem baselines (e.g., SAC / PPO, etc.). There's also something of a cottage industry of sample-efficient methods in the RL literature that are probably relevant here, and it's unclear how much this method is buying you relative to these. In short, it makes it difficult to situate how good this method really is, without a comparison to a more familiar method or problem. That said, the experiments provided by the authors in their chosen domains are certainly compelling, and do seem to provide strong evidence of the efficacy of the method. I just worry the authors have drawn a small box around the problem they care about which exactly coincides with where their method wins, and not terribly well overlapping with the space of problems people actually care to solve. Technical Quality: 3 good Clarity: 3 good Questions for Authors: * On line 30ish, the authors compare and contrast their formalism of AUF with the usual sequential decision paradigm of RL. 
I'm not sure I fully buy this distinction, and I'm pretty sure MDPs can capture exactly as much structure as the authors suggest their formalism is capable of capturing (After all, RL has been applied to and solved many difficult games that have long term structure to be modeled, it's just usually the case that any decisions that might lead to negative rewards far in the future are absorbed into some value function after having seen that detrimental effect happen enough times in, e.g., simulation, and not via some explicit counterfactual/graphical reasoning process). Could the authors clarify these sentences, or perhaps change the emphasis to be more one of sample efficiency, because I'm not sure I fully buy the strength of the current language? * The formalism of "rehearsation" proposed by the authors starts looking very similar to the process of, e.g., performing rollouts within a simulator to see if some trajectory stays within some feasibility region, or satisfies some kind of safety constraint (especially common in the robotics literature). Could the authors comment on how they see their work interacting with this line of work? * Could the authors provide some analysis of how the method's performance fares as the assumptions around low sample efficiency are relaxed? Table 1 clearly indicates the superiority of the method, but surely DDPG starts winning after the number of interactions with the environment is increased. Or do I misunderstand the relevant limits? (Regardless, it's always a great idea to better articulate where the method starts to fail!) * The authors are selling their method as one that requires fewer interactions with an environment, but also demonstrate pretty strong sensitivity of their method to the hyperparameter $\tau$ (which is obviously problem dependent). Could the authors comment here? Confidence: 3: You are fairly confident in your assessment. 
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your constructive comments! Below we address your concerns and questions in a point-by-point fashion. > W1. Regarding comparison with other baselines and sample-efficient methods (e.g. SAC / PPO). Thanks for the suggestion! We conducted new experiments with SAC and PPO. The average success probabilities are: | | SAC | PPO | Ours| | --- | --- | --- | --- | |Ride-hailing | .177 | .154 | .714 | |Bermuda | .205 | .190 | .679 | We see that limited sample sizes pose challenges for baselines to achieve the desired performance. However, we want to clarify that the main aim of the experiments was not to demonstrate that our method outperforms baselines. Instead, we want to show that it is possible and sensible to build decision-making on rehearsal relations (instead of correlation and causation), at least for some problems. It is more of a by-product that the proposed formulation has benefits for some problems with scarce data. And we hope this work could inspire future investigations into rehearsal-based decision-making. > Q1. Regarding the distinction between the rehearsal formalism and RL, and the structures that MDPs can capture. We acknowledge that MDP-based RL methods can be applied to problems like AUF (Lines 30-31). The MDP formalism can surely capture enough information with sufficient modeling granularity. However, an MDP may not be the most appropriate language for certain structural information. Let's consider an example with two actionable variables $A$ and $B$, and an outcome $C$ with the structure $A\leftarrow B \leftrightarrow C$. An MDP model needs to specify the transition probabilities between all possible 3-dimensional states and consider all actions $(A,B)=(a,b)$. On the other hand, by utilizing rehearsal graphs, we know that acting on $A$ does not have any effect on $C$, and thus we can safely consider actions on $B$ only. 
Both formalisms are capable of capturing all information involved in the decision process, but in some scenarios, the proposed formalism can reveal more direct and helpful information. Furthermore, as you pointed out, the ability to capture structural information is closely related to sample efficiency. If there are sufficiently many samples, the two formalisms should contain a similar amount of information. But if samples are limited, which is the main setting considered in this work, using the proposed formalism could be more beneficial. > Q2. Regarding the connection between rehearsation and rollouts. The rehearsal operation and rollouts share some similarities: both involve evaluating the effects of actions by activating the actions in some environments. Their difference lies in the underlying rationale. Rollouts are more akin to an empirical test of a policy, though conducted in simulators rather than real environments, whereas the rehearsal operation is rooted in the rehearsation relation, which is believed to be useful for decision-making and lies between correlation and causation. By performing rehearsal on a fully specified SRM, we can gain a precise understanding of how the effect of actions is propagated through the rehearsal links and what happens if we manually cut off the links. This level of understanding can be highly informative in guiding decision-making for some critical applications. In contrast, rollouts can be less strict: the simulator can even be some neural networks or other black-box simulators, whose inner mechanisms may not be transparent but are enough for evaluating a policy in some problems. Hence, the precise rehearsal operation and the flexible rollouts may complement each other and offer unique advantages in different problems. > Q3. Regarding how the performance fares as the assumptions around low sample efficiency are relaxed. We conducted new experiments with increased $T$. 
The average success probabilities of DDPG and our method on the Bermuda dataset are presented below: | T | 100 | 500 | 1000 | 4000 | 7000 | 10000 | 15000 | 20000 | | --- | --- | --- | --- | --- | --- | --- | --- | --- | | DDPG | .230 | .243 | .255 | .516 | .642 | .688 | .707 | .708 | | Ours | .679 | .687 | .693 | .709 | .703 | .707 | .708 | .709 The performance of DDPG increases as $T$ gets larger, and catches up with the proposed method around $T=15000$. The proposed method also benefits from increased samples and maintains success probabilities greater than 0.7 after $T=4000$. The stability of the proposed method with increased samples is easy to understand: the learned SRMs become more accurate with more data, which in turn helps find good decisions. > Q4. Regarding the sensitivity to $\tau$. $\tau$ represents the user's desired level of certainty in avoiding undesired future. A smaller $\tau$ indicates that the user is willing to accept a lower success probability. As long as the success probability is close to $\tau$, we should consider the user's requirement as satisfied, and the problem as successfully addressed. From this perspective, the proposed method succeeded with $\tau=0.3, 0.5, 0.7$, and does not show a strong sensitivity. As for cases with $\tau=0.9$, as explained in our response to Q2 raised by Reviewer 9i1V, the drop in performance was due to the specific implementation. With an alternative implementation, the method achieved improved success probabilities of 0.714 and 0.727 on two datasets. As you pointed out, if the goal is to maximize the success probability instead of matching a fixed preference, the optimal choice of $\tau$ will heavily depend on the specific problem. The user can start from a large $\tau$ and see if the constraints are satisfiable. If unsatisfiable, the decision-maker can lower $\tau$ and restart the process. This flexibility allows the user to fine-tune the decision-making process and adapt it to specific problems. 
Thank you again for your valuable feedback! We hope that the explanations address your concern appropriately. We will incorporate the above discussions into the revised paper. --- Rebuttal Comment 1.1: Comment: I thank the authors tremendously for their detailed response. I have raised my score to a 7 in response.
Summary: This paper presents a new graphical architecture, called SRM, whose goal is to sit in between correlational studies and causality models such as SCMs. The idea is that identifying variables influenced by decisions on other variables is easier than causality learning, and sufficient for decision making. The decision model is supposed to make decisions at different time steps, in order to avoid ending up in undesirable states. To do so, a new family of graphs is introduced, for which learning and inference algorithms are provided, at least in the case where structural equations are linear with Gaussian noise and where inferences are of the AUF (Avoiding Undesired Future) kind, which consists in maximizing the probability of the outcome being in desirable states over time-steps. Strengths: * The article provides a generic abstract framework for decision-making in a dynamic environment, while providing means to make inferences and find adequate decisions * First experiments are provided that demonstrate the performance of the algorithm, which seems to work as expected except for high required success rates, in which case it breaks down (due to dropping the success constraint) Weaknesses: * Comparison with other dynamic probabilistic decision models: the approach mainly deals with the problem of making decisions in a dynamical setting with the wish to attain given performances. Other frameworks that come to mind when reading the paper are Markov Decision Processes (where the goal would be to maximize the average hitting time of some states) and also Dynamic Influence Diagrams (or even Dynamic Bayesian Networks with the possibility of interventions as decisions). As those are dynamic tools allowing one to solve decision problems, it is unclear why they cannot apply to the current problems and frameworks? 
* Effects of hyper-parameters: the complete learning + inference scheme requires a lot of parameters to fix/tune, and it is unclear how much effort must go into determining those for the algorithms to work. For instance, how does the sampling of graphs as well as the number of graphs in $\mathcal{G}$ affect the efficiency of the algorithms? If $\mathcal{G}$ is small, the posterior could have only a few positive values, and it is not clear whether this is a problem at all? The same can be said for other steps, as those all require some approximations. Not much is given in this respect in the experiments, including in the supplementary materials. Technical Quality: 3 good Clarity: 3 good Questions for Authors: As this is not my main area of expertise (to say the least), I have only a few questions besides those raised by the mentioned weaknesses: * At present, it is unclear to me whether the data sets are synthetically generated or not? If so, is there not a bonus to the presented method to use data generated from SRM? * Is there no way to prevent the observation made in the experiments, i.e., to have a drastic loss of performance (in terms of success probabilities) if the constraint on the success probability cannot be satisfied? Cannot we rather have a bi-objective function scaled into one? It seems very extreme to have only a 0.1 probability of success when pre-specifying a wished probability of 0.9. * From a decision-theoretic perspective, maximizing the probability of being in a state for one output variable can be seen as a very peculiar decision problem, and in practice one often wishes to maximize some expected utility whose value may depend on many factors/variables. In particular, previously mentioned models such as MDP or reinforcement learning methods allow specifying such targets. Is the current framework restricted to probabilities of belonging to a state? Confidence: 3: You are fairly confident in your assessment. 
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Limitations are reasonably discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your insightful comments! Below we address your questions in a point-by-point fashion. > W1. It is unclear why other tools cannot apply to the current problems and frameworks. Thank you for raising this point! Indeed, other decision tools such as MDP can be applied to the AUF problem. We have conducted experiments with the MDP-based reinforcement learning (RL) method DDPG and contextual bandit method CATS in the original version and added two RL methods in the response to W1 raised by Reviewer LgnQ. However, as suggested by the experiments, in problems like AUF where the number of interactions is extremely sparse and limited, traditional modeling tools that do not leverage structural information may suffer from data scarcity, and resorting to causal knowledge could sometimes be inappropriate and excessive. Therefore, the use of rehearsal relations is a more suitable choice. Moreover, we want to emphasize that one of the key contributions of this work is providing evidence that showcases the feasibility of building decision-making models based on rehearsal relations. We believe that this finding can serve as a starting point for promising future investigations into the proposed rehearsal learning framework. > W2. Regarding the effects of hyper-parameters. The main hyper-parameters in the proposed framework are the size of $\mathcal{G}$ and the number of samples in each approximation step. Unfortunately, the effect of these parameters is obviously problem-dependent and may not have a universally applicable answer. To empirically explore the impact of these hyper-parameters, we conducted new experiments using the two settings in Sec. 5. The method exhibited similar performances with the number of samples ranging from 100 to 10,000. 
But since samples used in approximation are mainly drawn from the posterior distribution, whose computational costs are minor compared to the costs of taking real actions, it is reasonable to set a large sample size. And when doubling the number of graphs (originally set to the size of a learnable equivalence class), we observed that the posterior probability quickly concentrated on a small set of graphs, which had a limited effect on the subsequent rounds. The final average success probabilities for $\tau=0.7$ were 0.707 and 0.682 for the two datasets, respectively, almost matching the previous performance. On the other hand, halving the size resulted in degraded performance (0.577 and 0.474). The rationale behind the results is probably that graphs in a small set $\mathcal{G}$ are likely to be all far from the true graph, leading to poor approximation. In contrast, for a large $\mathcal{G}$, the weights of unreliable graphs diminish after only a few rounds and have little impact on the future learning process. These findings suggest that setting a large $\mathcal{G}$ could be a good choice if there are sufficient computational resources. > Q1. Whether the data sets are synthetically generated or not? If so, is there not a bonus to the presented method to use data generated from SRM? Yes, the datasets were synthetically generated since otherwise we would not be able to evaluate the performance of all methods due to the lack of true environments. The data were generated from SRMs because SRM is an appropriate way of describing the data-generating process in many problems. However, it is worth noting that the data can also be equivalently generated using an MDP formulation with sufficiently many (or continuous) states and actions. From this perspective, we think that the data-generating mechanisms should not be taken as a bonus for the proposed method. 
Instead, it is an inherent advantage of the proposed method to be able to find and utilize the structural information from data. > Q2. Is there no way to prevent the drastic loss of performance if the constraint on the success probability cannot be satisfied? Cannot we rather have a bi-objective function scaled into one? The drastic loss of performance is certainly preventable, and the bi-objective you mentioned is surely a feasible solution. As explained in the last paragraph of Sec. 5, the reason for the drastic loss is that in our implementation, once the method finds the constraint cannot be satisfied, it drops the constraint and just finds an action that could maximize the mutual information. We also mentioned an alternative implementation for this situation in Lines 262-263, where we recommend lowering the parameter $\tau$ and restarting for the current round. To resolve your concern, we conducted experiments with the alternative implementation for $\tau=0.9$. We decreased $\tau$ by 0.05 when the constraints were not satisfiable. The average success probabilities were then 0.714 and 0.727, which do not exhibit a drastic loss of performance. > Q3. Is the current framework restricted to probabilities of belonging to a state? No, the framework can be easily extended to handle other objectives. The SRM modeling and optimizing for an objective are two relatively independent components. For example, we can replace the constraints on the probability of belonging to a state in Eq. (6) with constraints on other user-defined expected utilities. The following Bayesian updates and decision-finding can readily adapt to this scenario with some modifications. The reason we currently use the probabilities of belonging to a state instead of other expected utilities is that, in this initial work, we consider the "undesired future" as a discrete binary variable. 
More general extensions, including using other reward functions and different action costs, are reserved for future studies and exploration of the rehearsal learning framework. Thank you again for your insightful feedback! We hope that the explanations address your concern appropriately. We will also incorporate the above discussions into the revised version of the paper. --- Rebuttal Comment 1.1: Title: Thank you for the answers Comment: Dear authors, Thanks a lot for the various answers, which confirms my positive opinion of the paper. I will not necessarily change my score, as I still think the paper deserves an accept, but I will raise my confidence level, as I am definitely more willing to defend my score and position thanks to the clarification. Best regards
null
NeurIPS_2023_submissions_huggingface
2,023
Summary: The paper argues that in decision-making, correlation is usually not enough but causation can be excessive. It introduces the idea of “rehearsation” which is a compromise between the two. Specifically, the paper proposes a novel rehearsal learning framework which models the interactions between interrelated (but not necessarily causally linked) variables in a dynamical system called structural rehearsal models (SRM). This framework is applied to the problem of Avoiding Undesirable Future. A Bayesian inference framework is adopted to learn the graph posterior of the SRM. Mutual information maximisation is used as a criterion to select the alterations (similar notion to intervention in the causal setting) from a candidate set. Finally, a PAC bound is derived to quantify the decision risk in this framework. Strengths: * The paper introduces the new paradigm of “rehearsation” to decision making problems. To the best of my knowledge, this is a novel contribution. * The PAC bound on the posterior is useful analysis and also practical in terms of quantifying uncertainty. * Considering the linear case is a useful first step in building this framework. Weaknesses: * The main issue for me is the lack of comparison to similar methods in the causal inference literature, e.g. Causal Bandits/BO. The rehearsation framework is presented as a more sound framework for tackling AUF, but causal methods can still be used as a practical approach for this problem even if they are philosophically not the principled methods to use. I would have liked to see a comparison against at least one prominent method in this area such as Causal BO. * Similarly to my point above, I would have liked to see a bit more discussion on the strengths and limitations of this work compared to other methods that the authors mention such as Reinforcement learning and Causal Bandits. This could be on a practical level rather than a theoretical one. 
Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: * How does the algorithm scale with the number of variables in the graph? * Can the rehearsation formulation be used for other decision making problems than AUF? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: Please see weaknesses section Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your constructive comments! Below we address your questions in a point-by-point fashion. > W1. Regarding comparison to methods in causal inference literature, e.g. Causal Bandits/BO. Thanks for your suggestion! We need to clarify that causal bandits (CB) and causal Bayesian optimization (CBO) methods are not applicable to AUF for two reasons. Firstly, CBO and some CB methods assume known causal graphs [1,2,3]. However, the causal graph is not provided as input in AUF, and a causal graph that faithfully describes the relations among the variables may not exist at all (e.g., the interrelated relations captured by rehearsal graphs). As causal graphs are not available in AUF, such methods are not applicable. Secondly, the optimal decisions in AUF depend on the observed evidence $X$. However, current CB methods do not consider the existence of $X$ and only seek an optimal arm that maximizes the expected reward. Such approaches are unlikely to yield good results, as there is no universally optimal decision (arm) for all possible circumstances represented by $X$. To verify the above points, we conducted new experiments with a recent CB method that can handle unknown causal graphs [4]. As we had anticipated, the average success probabilities were respectively 0.073 and 0.066 for two datasets, which are far from satisfying, indicating that CB is not suitable for AUF. Additionally, we want to emphasize that the main purpose of the experiments was not to show superior performance but rather to demonstrate the feasibility of solving decision-making problems with rehearsal modeling. The proposed framework yielding favorable results indicates that this work could lay a foundation for promising future investigations into this topic. > W2. Regarding the strengths and limitations. 
The strengths of our work stem from the innovative use of rehearsal relations, which effectively help reduce the required number of interactions, as demonstrated in the experiments. As mentioned in Sec. 1, the success of reinforcement learning (RL) relies on a large number of interactions with the environment, which is feasible for game playing but can be unsuitable for problems like AUF, where interactions are sparse. In such cases, leveraging structural knowledge to enhance decision-making is a natural consideration. This aspect shares some similarities with causal bandits, where structural causal knowledge is used. However, causal knowledge can be too excessive and is difficult to obtain. Thus, we propose using the rehearsal relation, offering a more flexible approach yet still providing decision-making benefits. Another advantage of our proposed method is its ability to make decisions in the face of uncertainty, as it does not presume knowledge of the true underlying graph. As for limitations, a notable one of the current work, which could be addressed in future studies, is that sequential decision-making has not been considered, while RL methods excel in handling such scenarios. We think this limitation does not interfere with the main purpose of this work, which is to demonstrate the feasibility of rehearsal learning. We believe that this is a promising aspect that can be addressed in future research. > Q1. How does the algorithm scale with the number of variables in the graph? This is indeed an important question in practice. Since the exact characterization of the framework is intricate, we next provide a simplified analysis. Let $d$ denote the number of variables. We divide the running time into two parts. The first part is learning $\mathcal{G}$. For a practical graph learning procedure, it usually has a running time of $O(\text{poly}(d))$. Building $\mathcal{G}$ therefore consumes $O(|\mathcal{G}|\text{poly}(d))$ time. 
The second part is the update and decision finding in each round. Consider a simplified setting with only directional edges. The number of parameters is $O(d^2)$ as there are at most $O(d^2)$ edges. Let $O(B)$ denote the running time of a Bayesian updating method for one equation, like Bayesian linear regression. The time complexity of decision finding is $O(d n \log n)$ (see Sec. 4.2). The time complexity of the second part is then roughly $O(B d^2 + n' |\mathcal{G}| + d n \log n)$, where $n'$ is the number of samples for updating the graph posterior. The overall complexity for $T$ rounds is thus $O(|\mathcal{G}|\text{poly}(d)+T(B d^2 + n' |\mathcal{G}| + d n \log n))$, which scales polynomially with $d$. When considering bi-directional edges, it is reasonable to restrict the size of cliques to a constant to reduce modeling complexities. In this case, the number of parameters can be bounded by $O(\text{poly}(d))$, and the overall running time could still scale polynomially with $d$. > Q2. Can the rehearsation formulation be used for other decision-making problems than AUF? Yes, the formulation can adapt to various decision-making problems. The discussion primarily focuses on AUF because the scarcity of interactions in AUF necessitates the use of structural knowledge, and the rehearsation formulation is a good choice. But the rehearsation formulation, especially the SRM, is quite general. We can describe the variables in other decision problems with SRMs as well and modify the objective functions. For example, if the goal is to maximize the outcome instead of AUF, the expected outcome could be placed into the constraint Eq. (6), and the following optimization steps still apply. Thank you again for providing valuable feedback! We hope that the explanations address your concern appropriately. Some of the above discussion will also be incorporated into the revised version of the paper. [1] Aglietti et al. Causal Bayesian optimization. AISTATS, 2020. [2] Sussex et al. 
Model-based Causal Bayesian Optimization. ICLR, 2023. [3] Lee and Bareinboim. Structural Causal Bandits: Where to Intervene? NeurIPS, 2018. [4] Malek et al. Additive Causal Bandits with Unknown Graph. ICML. 2023.
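The running-time expression in the scaling answer above (Q1) can be illustrated with a back-of-envelope cost model. Everything in this sketch is an assumption for illustration: the function name, modeling $\text{poly}(d)$ as `d ** poly_degree`, and the example parameter values are not from the paper.

```python
import math

def total_cost(d, G_size, T, B, n_prime, n, poly_degree=3):
    """Rough operation count for |G|*poly(d) + T*(B*d^2 + n'*|G| + d*n*log n).

    poly(d) is modeled as d**poly_degree, an illustrative assumption.
    """
    learning = G_size * d ** poly_degree          # building the graph set G
    per_round = B * d ** 2 + n_prime * G_size + d * n * math.log(n)
    return learning + T * per_round               # T rounds of update + decision finding
```

With `poly_degree=3`, doubling `d` multiplies each term by at most `2 ** 3 = 8`, so the total grows polynomially in `d`, consistent with the claim in the rebuttal.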
null
null
null
null
null
null
Thought Cloning: Learning to Think while Acting by Imitating Human Thinking
Accept (spotlight)
Summary: This paper develops a method for "thought cloning", which involves imitating a human's thought process during task performance. The authors apply this to a partially observable 2D grid world domain. They show that thought cloning yields superior performance compared to behavior cloning. Strengths: - The approach is innovative, and could be highly impactful once it is scaled up. - The paper is clearly written. - The methods are sound, as far as I can tell. - The application to AI safety and interpretability is interesting and increases the significance of the contribution. - The paper makes a strong empirical case that their approach works, at least for gridworlds. - I liked the cognitive science motivation, even if I might quibble with some of the arguments. Weaknesses: - It would have made a stronger contribution if the authors could show that this method scales up to more complex domains, though I appreciate that introduces many new difficulties and they wanted to show that the approach works on simpler domains first. - I think the authors are a bit loose with their arguments about human cognition. It is hotly debated to what extent cognition depends on language in a strong way. It is also important to note that one can endorse a "language of thought" hypothesis about high-level cognition without endorsing the hypothesis that this corresponds to natural language. In any case, I appreciate that these points have little bearing on the substance of this paper. - Eq. 1 could benefit from more explanation. - Please state what error bars show in figures. UPDATE: the authors have addressed my comments. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: I think the paper is basically good as is. If it were a longer format, I would ask about how the authors would scale up their approach (as they discuss briefly in the paper). UPDATE: the authors have addressed my comments. 
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: Yes, the authors have adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 9: Very Strong Accept: Technically flawless paper with groundbreaking impact on at least one area of AI/ML and excellent impact on multiple areas of AI/ML, with flawless evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your comprehensive review of our paper and your acknowledgment of the strengths of our work, including the novelty and soundness of the method, the strong empirical case, and the contribution to AI Safety and Interpretability. We appreciate the depth of your feedback and are pleased to note your positive assessment. We address your concerns and suggestions below. ***“It would have made a stronger contribution if the authors could show that this method scales up to more complex domains, though I appreciate that introduces many new difficulties and they wanted to show that the approach works on simpler domains first.”*** We think this is a great direction for future work. As you say, that would introduce additional difficulties and we wanted to show (and study) the approach on a simpler (and easier to analyze) domain first. We believe the best domain to do the science for this paper introducing the method was BabyAI. It is in fact a quite challenging domain, where there are complicated language-described tasks and long-horizon action controls, but it also has other key attributes, such as the ability to generate synthetic thought data and being easy to analyze. We believe the core principles of Thought Cloning hold promise in more challenging domains, and that much interesting future work exists to see how it performs in such domains. ***“I think the authors are a bit loose with their arguments about human cognition. It is hotly debated to what extent cognition depends on language in a strong way. It is also important to note that one can endorse a "language of thought" hypothesis about high-level cognition without endorsing the hypothesis that this corresponds to natural language. In any case, I appreciate that these points have little bearing on the substance of this paper.”*** Thank you for your comments. Please see the general reviewer response, where this is addressed. ***“Eq. 
1 could benefit from more explanation.”*** Thank you for your comment. We will add the following text in L104 in the manuscript. “The first part of the loss is the Thought Cloning loss, where the Upper-level Component is conditioned on the history of thoughts and observations, and the mission, to predict the thought. That thought is then compared with the ground truth thought in the dataset. The second part is the action loss, where the Lower-level Component is conditioned on the current thought, the history of observations, and the mission to predict the action to take, and we then calculate the loss by comparing the predicted action to the ground truth action in the dataset.” ***“Please state what error bars show in figures.”*** Thank you for pointing this out. The error bars are the 95% confidence interval from five runs of experiments. We will add the explanation in the related figure captions. Thank you once again for your review and positive comments. Your score is already high and we appreciate that, but we wonder whether, if you feel the paper is stronger having read our responses, you might consider increasing it further to help its chance of being published and shared with the ML community. We deeply thank you for your consideration. --- Rebuttal Comment 1.1: Title: response to rebuttal Comment: I thank the authors for addressing my comments. I believe the paper should be accepted, and I'm raising my score to 9. --- Reply to Comment 1.1.1: Comment: Thank you very much. We deeply appreciate your time, insightful review, thoughtful consideration, and for increasing your score. We hope the paper is published and, if it is, look forward to sharing it with the NeurIPS community. Title: Thank you
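The two-part loss described in the Eq. (1) explanation — a thought-prediction loss plus an action-prediction loss — can be sketched in plain Python. This is a hypothetical minimal illustration assuming per-token cross-entropy and a weighting coefficient `alpha`; it is not the authors' implementation, which conditions on observation and mission histories via learned models.

```python
import math

def cross_entropy(probs, target_idx):
    """Negative log-likelihood of the target index under a categorical distribution."""
    return -math.log(probs[target_idx])

def thought_cloning_loss(thought_probs, true_thought, action_probs, true_action, alpha=1.0):
    """Thought loss (summed over predicted thought tokens) plus alpha-weighted action loss.

    thought_probs: list of per-token predicted distributions from the Upper-level Component.
    action_probs: predicted action distribution from the Lower-level Component.
    """
    thought_loss = sum(cross_entropy(p, t) for p, t in zip(thought_probs, true_thought))
    action_loss = cross_entropy(action_probs, true_action)
    return thought_loss + alpha * action_loss
```

With `alpha=1` both components contribute equally, and a predictor that puts probability 1 on every ground-truth token and action attains a loss of exactly 0.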
Summary: This paper provides Thought Cloning (TC), an imitation learning method that clones not only behaviors but also thoughts. Here, thoughts are descriptive texts for each behavior. The basic idea is that language can help agents better plan their actions and adapt to new environments. More specifically, the TC agent consists of two components: the Upper-Level and Lower-Level Components. The Upper-Level Component learns to generate a thought conditioned on an observation, a mission, and a history of thoughts. The Lower-Level Component learns to generate an action conditioned on an observation, a mission, and the generated thought. This paper demonstrated the effectiveness of TC on BabyAI, since synthetic thoughts can be easily generated on BabyAI. The experiments showed that TC can be trained much faster than Behavioral Cloning (BC), and TC can better generalize to out-of-distribution environments than BC. Strengths: This paper has some strong points as follows. - First of all, this paper is very well written and organized. - The idea of generating actions conditioned on thoughts (descriptive texts for actions) is simple but effective. - Thought Cloning (TC) seems to have some advantages compared to Behavioral Cloning (BC). First, TC is trained faster than BC. Second, TC may improve the ability to generalize to out-of-distribution environments. Third, TC also enhances interpretability, since humans can see the descriptive text for each action. Weaknesses: This paper has some weak points as follows. - One of my main concerns is how we can effectively collect thoughts for actions to train TC agents. For the purpose of demonstration, this paper trained the TC agent on BabyAI where thoughts for actions are synthetically generated. However, in real-world scenarios, it is not easy to collect (high-quality) thoughts for actions. 
Even though the authors mentioned that Youtube videos and vision-language models (VLM) can be used for collecting thoughts for actions, such thoughts would be very noisy compared to synthetic thoughts on BabyAI. Accordingly, the training efficiency and the performance may be deteriorated. - This paper implemented the TC agent mainly based on LSTM. I am not sure that LSTM can effectively generate thoughts conditioned on a history of thoughts. It would be interesting to show the performance of a Transformer-based TC agent. - The main experiments of this paper were performed on only one environment, BabyAI. It would be better to provide experiment results on other environments. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Q1. How can we effectively collect thoughts for actions to train TC agents? Q2. How robust is the TC agent with regard to the quality of the training thought data? What is the effect if we control the quality of synthetic thoughts on BabyAI? Q3. Is LSTM proper as a base model for TC agents? If we use Transformers instead of LSTMs, is the performance improved or not? Q4. How well does the TC agent perform on other environments in addition to BabyAI? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: - The effectiveness of TC agents seems to be highly dependent on the quality of thoughts for actions. It would be better to provide a way of efficiently collecting high-quality thought data. Also, it would be interesting to show how robust the TC agent is with regard to the quality of thoughts. - It would be better to provide more results on other environments. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful review and feedback on our manuscript. We are delighted that you consider our idea simple yet effective, and recognize our contribution to the capabilities and interpretability of agents. Below we address each of your concerns and questions. ***“How can we effectively collect thoughts for actions to train TC agents?”*** Thank you for your comments. Please see the general reviewer response, where this is addressed. ***“Even though the authors mentioned that Youtube videos and vision-language models (VLM) can be used for collecting thoughts for actions, such thoughts would be very noisy compared to synthetic thoughts on BabyAI.”*** We recognize the noise in internet data and that it potentially could be an issue. However, we believe this challenge could be effectively mitigated with proper data processing. An example from MineCLIP [7] shows that careful heuristic-based processing of the captions could make data clean enough to train multi-modal models (e.g. a simple heuristic focusing on domain-specific vocabulary could remove most off-topic text like “welcome to my channel”). More promisingly, one could use language models to filter out off-topic data (they are quite capable and will only get better), either by removing any off-topic comments or determining that some videos should be excluded because not enough of the commentary is helpful. Additionally, even if noise is present, recent ML history shows that–at scale–such “noise” does not prevent learning the “signal”: examples of this occurring on internet-scale data include GPT, CLIP, and VPT (all of which one might have argued ahead of time would not work due to training on noisy data, but in fact work very well despite noise). We view these as fascinating topics of future research inspired by our paper and will say so in the revised paper. ***“How robust is the TC agent with regard to the quality of the training thought data? 
What is the effect if we control the quality of synthetic thoughts on BabyAI?”*** We added noise to our training data and found TC is robust to it (in fact, it helps because it trains the agent how to get back on track if it makes a mistake in its thinking). Currently, about 3.5% of the data is noisy. If you like, we would be happy to do an experimental sweep of different fractions of noise. Given how long it takes to perform TC runs, it was not possible to do that sweep during the short amount of time in the rebuttal window. If you request it, we could also do tests with other types of noise, including adding phrases like “please subscribe to my channel” and additional tests using LLMs to filter out such off-topic phrases, but we believe such a substantial additional set of experiments is better left to future work where there is space to properly document it. ***“Is LSTM proper as a base model for TC agents? If we use Transformers instead of LSTMs, is the performance improved or not?”*** We recognize the superiority of the Transformer architecture in recent work. When it comes to real-world scenarios with large-scale language or multimodal tasks, a Transformer would likely be a more powerful model and would be a natural choice. However, our test domain (BabyAI) is still a middle-scale problem compared to the internet-scale datasets where Transformers truly shine. Thus, the data-hungry nature of Transformers could make them not the most powerful architecture, or at least not necessary, in BabyAI. One example is from Think Before You Act (Mezghani et al. 2023), where they use GPT-2 like Transformers in the BabyAI environment, but their performance (85.2% Success Rate on BossLevel) is outperformed by BabyAI1.1 (Hui et al. 2020), an LSTM-based architecture baseline (90.4% Success Rate on BossLevel), which has fewer parameters than Think Before You Act. We will mention the value of testing with Transformers in future work. 
***“How well does the TC agent perform on other environments in addition to BabyAI?”*** We think this is a great direction for future work. However, we believe the best domain in which to do the science for this paper introducing the method was BabyAI. It is in fact quite a challenging domain, with complicated language-described tasks and long-horizon action control, but it also has other key attributes, such as the ability to generate synthetic thought data. We believe the core principles of Thought Cloning hold promise in more challenging domains, and that much interesting future work exists to see how it performs in such domains. Thank you once again for your review and positive comments. Your score is already high and we appreciate that, but if you feel the paper is stronger having read our responses, we wonder if you might consider increasing it further to help its chance of being published and shared with the ML community. We deeply thank you for your consideration. --- Rebuttal Comment 1.1: Comment: Thank you for providing a thoughtful author response. I have carefully read the response. I think that my major concern (e.g., collecting high-quality thoughts for actions to train TC agents) has been largely addressed. Even though TC was evaluated only on BabyAI, I think that this paper is a compelling early work that can promote interesting follow-up research in the community. Therefore, it would be beneficial for this paper to be published in NeurIPS.
Summary: The paper studies the problem of imitation learning and attempts to improve existing IL algorithms by training the agent to think like the expert that is being imitated. Authors propose Thought Cloning - an extension to behavior cloning that seeks to imitate thoughts expressed in natural language. They evaluate the proposed method on thought-augmented data from solving tasks in the BabyAI gridworld. Authors compare the proposed algorithm to standard behavior cloning and observe faster training and better generalization. The paper also contains extended discussion about the benefits of thought cloning with respect to AI safety. Strengths: 1. The Thought Cloning method appears reasonable and exploits the paper's high level idea in a natural / intuitive fashion. 2. The paper contains a reasonable evaluation of the proposed idea by augmenting popular environments with expert's "thoughts". Typical caveats are addressed with a reasonable ablation analysis. 3. The paper is well-written and easy to follow. The general presentation quality is high. Weaknesses: ### Concern 1: lack of comparison with prior work There are several previous papers that also attempt to teach an agent to think using natural language. Authors address planning in natural language in section 4.1, but there is a line of work that is not covered by planning. 
From a surface-level analysis of existing work, there are papers that * Improve reinforcement learning by performing self-feedback - Madaan et al, Self-Refine: Iterative Refinement with Self-Feedback * Ask a language model to reflect on its performance in natural language, improving learning - Shin et al, Reflexion: an autonomous agent with dynamic memory and self-reflection * Produce natural language explanations - parts of Wang et al, Describe, Explain, Plan and Select: Interactive Planning with Large Language Models Enables Open-World Multi-Task Agents Since this is only a surface-level analysis by a non-expert (me), there could be more related works that study this problem. I would argue that the paper could benefit greatly from comparing against all related works. This should include both a theoretical comparison of the proposed approach (e.g. which parts of TC are novel), and an empirical evaluation of how this translates to training speed, generalization, etc. ### Concern 2: Safety Implications of Thought Cloning One of the two main rationales from the introductory section is that thought cloning could make the resulting agent more safe. In S 3.5 / Fig.5, authors evaluate a thought-based safety technique that they call "precrime intervention" and find that: > remarkably, Precrime Intervention almost entirely eliminates all unsafe behaviors, thereby demonstrating the promising potential of TC agents in advancing AI safety While this sounds intuitively correct, there seem to be (at least) two caveats that could result in a less safe system due to a false sense of safety. First, under the formulation proposed in Section 2, Thought Cloning is incentivized to copy any cognitive errors from the expert, including lies, false rationalizations and general ignorance. Using the car driver example from L48-49, an agent could (falsely) argue that it is legal to ignore some traffic rule in this specific circumstance. 
It is easy to imagine that human "expert" drivers will avoid "testifying against themselves" without a dedicated system to control for that. A sufficiently powerful language model can mimic their plausible explanations to mislead the passenger. Second, if the proposed system is improved upon -- either directly with Reinforcement Learning or indirectly with manual updates -- the agent would be incentivised to produce explanations that satisfy the user, so as to avoid being stopped. Note that these explanations do not have to be true - they simply have to be convincing. This can be seen as a special case of the "stop button problem" (see, e.g., https://arxiv.org/pdf/1611.08219.pdf or https://www.lesswrong.com/posts/wxbMsGgdHEgZ65Zyi/stop-button-towards-a-causal-solution ) In other words, the initial system has a risk of inheriting potential flaws of (human) experts, and any improvements to the system run a risk of incentivising less truthful explanations. From my (limited) point of view, it appears too early to claim that TC precrime intervention can truly increase real-world system safety. One way to improve this would be to add a dedicated safety analysis, and formulate and prove safety guarantees. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: > we can improve AI agents by training them to think like humans do. Disclaimer: i am no expert on the subject matter, feel free to ignore the question. In the beginning of the paper (L5-L6), it is presented as a fact that humans think in a language. To the best of my knowledge, there is no scientific consensus that all humans think in a language, or that humans think only in language - though I am not an expert on that account. Is this really true? If so, please cite the relevant work (e.g. in S1). If not, please paraphrase the sentence to avoid accidentally misleading the reader. Confidence: 3: You are fairly confident in your assessment. 
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 1 poor Limitations: Authors have adequately addressed most of the limitations of their work. As for the societal impact, I have highlighted several safety concerns for the proposed system in the "weakness" section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review, and for noting our method is reasonable and supported by our evaluations and arguments. We sincerely believe our paper presents a significant and beneficial contribution that would enrich the ML community upon publication. Reflecting upon your concerns, we believe they either have been fixed, or are areas for future work rather than fundamental flaws that should prevent publication. Overall we would have predicted a higher score based just on your comments alone. Your current score recommends not publishing this work, but we hope you will keep an open mind to reconsidering in light of our revisions and replies, and given that two other reviewers rated it a 7 (accept) and an 8 (strong accept). We have attempted to address each of your concerns, and significantly improved the manuscript as a result. **1. Concern about the lack of comparison with prior work** ***“There are several previous papers that also attempt to teach an agent to think using natural language. Authors address planning in natural language in section 4.1, but there is a line of work that is not covered by planning.”*** We will add the following text in the related works section to address this: “There are many works that use LLMs to improve the ability of agents to act [Wang 2023], but their approaches are quite different. They generate thoughts in addition to actions [Yao et al. 2022] or let LLMs replan based on new information [Madaan et al. 2023, Shinn et al. 2023], but none directly learn to think while acting by imitating human thought data. 
Thus, unlike Thought Cloning, they do not benefit from learning from human thought demonstrations how to do things like plan, replan, create high-level goals and the subgoals required to achieve them, and the many other benefits of thinking intelligently during acting.” Please note that the area and the work you mentioned “Wang et al, Describe, Explain, Plan and Select: Interactive Planning with Large Language Models Enables Open-World Multi-Task Agents” is covered in L308. ***“I would argue that the paper could benefit greatly from comparing against all related works. This should include both a theoretical comparison of the proposed approach (e.g. which parts of TC are novel), and an empirical evaluation of how this translates to training speed, generalization, etc.”*** We compared TC to what we feel is the closest baseline (behavioral cloning). As just discussed, there are other works that may seem related (since they use LLMs to think in some ways), but they are actually quite different, and direct comparisons are thus apples to oranges and not very scientifically informative. However, we did add a performance comparison to one similar related work (Think Before You Act, Mezghani et al., 2023), which also learns from action + language data in the same domain (BabyAI). Thought Cloning outperforms it, showing that the main idea of Thought Cloning (imitating human thinking) is beneficial. (See Table 1 in new results PDF) **2. Concern about the Safety Implications of Thought Cloning** ***“under the formulation proposed in Section 2, Thought Cloning is incentivized to copy any cognitive errors from the expert, including lies, false rationalizations and general ignorance.”*** Thank you for highlighting this potential concern. 
We added the following text as a result: “As occurs with LLM pretraining and other forms of Behavioral Cloning, Thought Cloning could inadvertently inherit undesirable human flaws, such as speaking falsehoods, providing false yet persuasive rationalizations, or being biased. Alignment techniques are being constantly improved to address these challenges [Ouyang et al. 2022]. However, even improving AI agent safety up to the level of a (flawed) human would be a major advance, even if the resulting system is not perfect. Additionally, a distinguishing feature of Thought Cloning is it provides the ability to interpret and prevent these flaws from culminating into actions, making TC a more favorable approach in this regard.” ***“if the proposed system is improved upon -- either directly with Reinforcement Learning or indirectly with manual updates, the agent would be incentivised to produce explanations that satisfy the user, so as to avoid being stopped. Note that these explanations do not have to be true - they simply have to be convincing.”*** We agree, but that is a reason to avoid training the system to not be stopped by precrime intervention. One should instead train agents to do desirable things and avoid undesirable things directly, and reserve precrime interruption as a safety mechanism (and not training reward function). Additionally, one could use ever-smarter LLMs (or, better, multi-modal models like vision-language models) to monitor the thoughts and actions to attempt to detect such issues, a capability that is only possible because Thought Cloning provides interpretable access to the agent's thoughts. We will add these nice insights to the paper. Thanks! ***“In other words, the initial system has a risk of inheriting potential flaws of (human) experts and any improvements to the system run a risk of incentivising less truthful explanations. 
From my (limited) point of view, it appears too early to claim that TC precrime intervention can truly increase the real-world system safety. One way to improve this would be to add a dedicated safety analysis, formulate and prove safety guarantees.”*** Our paper contains arguments for and experimental analyses on the effectiveness of precrime intervention, which can serve to inspire others to advance this method and/or test it in real-world scenarios. We’ll ensure the writing does not guarantee it will be effective, but instead (like many helpful scientific publications) simply provides evidence of a new, promising method. **3. It is debatable whether humans exclusively think in language.** Addressed in the Response to All Reviewers. --- Rebuttal 2: Title: Reviewer should reconsider their score Comment: I feel that the score given in this review is excessively low given the content of the review. Even granting that there are thorny issues with the safety applications, I don't think the reviewer has brought up any fatal criticism that indicates "technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations." Moreover, the authors have responded cogently to the comments. Therefore, I encourage this reviewer to raise their score, or otherwise indicate why their concerns have not been addressed.
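As a purely illustrative sketch of the precrime-intervention idea debated in this exchange (the unsafe-phrase list, function names, and thought strings below are invented, not from the paper): the agent's declared thought is screened before its action executes, which is only possible because TC exposes thoughts in natural language:

```python
# Hypothetical screen: halt the agent when its declared thought matches
# a user-defined unsafe intent, before the corresponding action runs.
UNSAFE_PHRASES = ["pick up the forbidden key", "open the locked vault"]

def precrime_intervene(thought: str) -> bool:
    """Return True (halt the agent) if the thought declares an unsafe intent."""
    return any(p in thought.lower() for p in UNSAFE_PHRASES)

def step(thought: str, action: str) -> str:
    if precrime_intervene(thought):
        return "HALTED"   # stop before the unsafe action is taken
    return action         # otherwise act normally

assert step("I will pick up the forbidden key first", "pickup") == "HALTED"
assert step("go to the green ball", "forward") == "forward"
```

The reviewer's caveat applies directly here: a substring match only catches thoughts the agent states truthfully, so the mechanism is a monitoring layer, not a guarantee.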
Summary: This paper presents an approach that incorporates discrete intermediate-level descriptions and goals to train RL agents to perform higher level actions. These intermediate-level descriptions and goals are described in natural language, and the authors refer to them as "thoughts" that the RL agent learns, which then influence a desired action, in a process called Thought Cloning (TC). The paper contrasts this approach with conventional Imitation Learning that relies on Behavioral Cloning (BC). The paper claims that TC learns faster than BC and generalizes much better than BC across cognitive difficulty and behavioral difficulty. Strengths: Originality: This paper appears to propose a novel method, but I am not certain of that. Quality: The collected experimental data and the collection of synthetic training data are all sound, and several valid metrics are provided to support the paper's claims; this is a complete, but, I believe, flawed work due to the construction of the ablation study. Significance: If the ablation study can be properly addressed and modified to show a clear advantage of TC over BC, then this paper would be very significant. Weaknesses: Writing: Prose used in the abstract and introduction is superfluous at times. Many sections speculate why the proposed method is superior, supported by biological comparison, but the paper lacks evidence for any such comparison. The introduction could possibly be halved in length and still convey the most important aspects of this paper. Specifically, lines 47-81 could easily be condensed. Quality: The ablation study performed, as I understand it, is the weakest component of this paper. The implementation of TC w/o Imitating Thought is described opaquely, and leaves a lot of inference to the reader. As I understand it from the paper, a copy of the TC architecture is made, however the control signals for Next Thought and Thought History are left null. 
While this does create a model with an equivalent number of parameters as the original TC, only the parameters for the action agent are trained. Clarity: See above issues with interpreting the ablation study. Experiment Design: Comparing BC to TC is an imperfect comparison as noted by the paper, hence the inclusion of TC w/o Imitating Thought; however, this model does not adequately address differences in model performance either. My intuition tells me that TC significantly outperforms BC because it has access to more training data (the intermediate thoughts) and comprises more parameters, which then allows the model to train faster and generalize better. The paper does not do an adequate job of addressing this concern. A possible new experiment that could address this is to ensure training data is equivalent between the models, and allow the BC model to approximate the same number of parameters as TC. Without this additional comparison, the true cause of the improvement cannot be confidently known. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: What is the architecture of TC w/o Imitating Thought and what are its control signals? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: A major limitation to this approach mentioned in the introduction is the availability of applicable thought data. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed review and acknowledging the strengths of our work, including its novelty and saying it is ***“very significant”*** (provided your requested experiments confirm TC outperforms BC, which they all did!). We have addressed each of your concerns, significantly improving the manuscript as a result. We feel this paper makes an important contribution the ML community will benefit from if published. Your current score is low and likely will prevent publication. We hope you will keep an open mind to changing your score in light of our improvements, and since two other reviewers rated the paper a 7 (accept) and 8 (strong accept). ***“The implementation of TC w/o Imitating Thought is described opaquely ... As I understand it …[a] copy of the TC architecture is made, however the control signals for Next Thought and Thought History are left null. While this does create a model with an equivalent number of parameters as the original TC, only the parameters for the action agent are trained.”*** All parameters were trained in the 'TC w/o Imitating Thought' control (described in L158-162). We apologize it was not clearer. We will add the following to make it more clear. “we introduce an ablation variant called TC w/o Imitating Thought to demonstrate that TC’s superiority is not solely due to its larger number of parameters. This variant’s architecture is identical to the TC architecture, except for a minor architectural difference where the latent vector from the upper level is directly input to the lower level as thought. This adjustment is necessary because this variant is not trained on the Thought Cloning loss, so we do not have per-word supervision. To train these parameters, we thus need to train them based on how these “thoughts” contribute to the final performance (i.e. they are trained end-to-end). 
If we sampled words (as in Thought Cloning), we could not train these parameters end-to-end because hard sampling of words is non-differentiable, so gradients could not flow backward through this operation. Thus, we make one small change in order to allow the parameters to be trained via SGD, which is to pass the logits of the Upper-level Component directly into the Lower-level Component, which is a differentiable operation. We feel this is the closest and fairest control possible to Thought Cloning, allowing virtually the same architecture and the same number of parameters, but not including the Thought Cloning innovation of exploiting human thought data.” ***"My intuition tells me that TC significantly outperforms BC because it has access to more training data (the intermediate thoughts) and comprises of more parameters which then allows the model to train faster and generalize better."*** We designed the TC w/o Imitating Thought control to address this concern, allowing the baseline to have the same number of parameters (and architecture) as Thought Cloning. We hope now that you know these parameters were trained, your main objection to the paper is removed. However, for completeness, we try to address it in another way. Instead of holding the architecture the same, we create a BC control with a more canonical BC architecture, but with the same number of parameters as TC. Results show it does not perform nearly as well as TC. (see new results PDF) To address if “has access to more training data (the intermediate thoughts)”, we will add this text: “One might argue that TC gets more data (one set of actions and one set of words per episode), and thus that a proper BC control is to give BC 2x data. A counter is that such a control is unnecessary because noticing and harnessing this additional (often free) and heretofore ignored data stream is a central contribution of this paper, so showing that using this data improves things is the main comparison to be made. 
However, for completeness, we ran an experiment giving BC twice as much data: results show BC-2xData is still far slower to learn and has significantly worse performance at convergence.” (See new results PDF) Since you said TC’s performance advantage might come from having extra parameters or more data, we wanted to make sure it did not come from having both. We tried BC with 2x data and a pure BC architecture with the same number of parameters as TC, and it too underperformed TC. Also, a prior work “Think Before You Act [Mezghani et al. 2023]” had a similar number of episodes of actions and language data in the same domain, and their model has more parameters than ours. TC also outperforms them. (See new results PDF) You wrote “If the ablation study can be properly addressed and modified to show a clear advantage of TC over BC, then this paper would be very significant.” The original control, a new comparison to prior work, and all of the new experiments you requested do indeed show a clear advantage of TC over BC (dramatically faster learning and better performance at convergence), providing strong evidence that the advantages of TC are not solely due to additional training data or model capacity. Given that you said that would make this paper “very significant” were this true, we hope you are open to substantially increasing your score. Thank you for considering it! ***"Many sections speculate why the proposed method is superior supported by biological comparison, but the paper lacks evidence for any such comparison."*** Thank you for pointing this out. We will be happy to revise the writing. While we used human thinking as a source of inspiration, and we do think it suggests this direction is powerful, one could jettison that analogy and our method would stand alone on its empirical results (higher performance, debugability, safety benefits, etc.). We will eliminate superfluous language and make these issues clearer in the paper. 
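The differentiability point made earlier in this rebuttal (why the ablation passes the Upper-level logits straight to the Lower-level Component instead of sampling words) can be demonstrated numerically. In this sketch (the linear lower level, vocabulary size, and random values are invented stand-ins), a small perturbation of one logit changes the output of the soft pass-through but not of hard argmax sampling, which is locally constant and hence carries no gradient:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def lower_level(thought_vec, W):
    """Stand-in for the Lower-level Component: a linear map on the thought input."""
    return W @ thought_vec

def one_hot(z):
    """Hard word sampling via argmax: non-differentiable in z."""
    v = np.zeros_like(z)
    v[np.argmax(z)] = 1.0
    return v

rng = np.random.default_rng(0)
logits = rng.normal(size=5)           # Upper-level output over a 5-word vocabulary
W = rng.normal(size=(3, 5))
eps = np.zeros(5); eps[0] = 1e-4      # tiny perturbation of one logit

# Soft route (as in the TC w/o Imitating Thought control): pass logits down.
soft_delta = lower_level(softmax(logits + eps), W) - lower_level(softmax(logits), W)

# Hard route: argmax -> one-hot does not move under the tiny perturbation.
hard_delta = lower_level(one_hot(logits + eps), W) - lower_level(one_hot(logits), W)

assert np.abs(soft_delta).sum() > 0   # signal flows through the soft route
assert np.abs(hard_delta).sum() == 0  # hard route is locally constant: no gradient
```

This is exactly why the control can be trained end-to-end with SGD while sampled words cannot.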
***“A major limitation to this approach mentioned in the introduction is the availability of applicable thought data.”*** Addressed in the Response to All Reviewers. --- Rebuttal Comment 1.1: Comment: Thank you for addressing my concerns completely. I believe the paper should be accepted, and have changed my score to reflect such. The various control architectures matching the number of parameters and amount of data leave me with confidence that the TC architecture is the cause of the improvement. One additional request (that I feel would further strengthen the paper): could the authors address why "TC w/o Imitating Thought" continues to achieve a much lower success rate in comparison to the various control architectures? My initial reaction is to say that the architecture overall is deficient for a true comparison. I understand that this is the "closest and fairest control possible," (and how I might have constructed such a control) but it's puzzling why "TC w/o Imitating Thought" never achieved similar success rates, when every other control model did.
Rebuttal 1: Rebuttal: We are deeply grateful to the reviewers for their comprehensive evaluations and thoughtful feedback on our work. We are encouraged by the reviewers' positive comments including: - “The approach is innovative, and could be **highly impactful** once it is scaled up.” (HBeY) “The idea of generating actions conditioned on thoughts (descriptive texts for actions) is simple but effective.” (9iZD) - “If the ablation study can be properly addressed and modified to show a clear advantage of TC over BC, then **this paper would be very significant**.” (gnSR) [The new, requested ablation studies do show such a clear advantage] - “The paper makes **a strong empirical case** that their approach works” (HBeY) “The paper contains a reasonable evaluation of the proposed idea… Typical caveats are addressed with a reasonable ablation analysis.” (QGuE) “The collected experimental data and the collection of synthetic training data are all sound.” (gnSR) - “The application to AI safety and interpretability is interesting and increases the significance of the contribution.” (HBeY) “TC also enhance the interpretability, since human can see descriptive texts for each actions.” (9iZD) - “TC is trained faster than BC… TC may improve the ability to generalize to out-of-distribution environments” (9iZD) - “The paper is well-written and easy to follow.” (QGuE) “This paper is very well written and organized” (9iZD) We feel this paper makes an important, helpful contribution that the ML community will benefit from if published. **We appreciate that two reviewers rated the paper a 7 (accept) and 8 (strong accept).** But the two other scores are low and likely will prevent publication. 
We hope all reviewers will keep an open mind to increasing your scores in light of our improvements, so that the ML community will be able to benefit from learning about this work. In response to your requests, we have produced many new results (see results PDF), all of which strongly empirically support the benefits of Thought Cloning. We also improved the paper in the ways you suggested. Here is a quick summary of the major revisions: - We clarify that all parameters were in fact trained in the “Thought Cloning w/o Imitating Thought” control, and thus there was already a behavioral cloning control for the number of parameters (and architecture), eliminating one reviewer’s major objection. While this was described in the original paper, we have modified the paper to make the description of this control much clearer. For completeness, we also performed additional experiments requested by reviewer gnSR, all of which show that the superiority of TC does not come from a larger number of parameters or from (arguably) having more training data. Instead, it is the main idea of Thought Cloning that drives the performance gains. - We modified the language in the paper’s motivation, making it clear that it is simply a hypothesis (and one up for debate) that humans think in natural language (and have added citations that the claim is debatable). We also point out that this biological comparison is simply motivation, and that the method can independently stand on its empirical results even if one does not believe in this hypothesis or analogy. - We addressed the safety concern raised by reviewer QGuE. - We expanded and improved comparisons to prior work. - We've expanded the explanation for Eq. 1 and explained the error bars more clearly in the corresponding figure captions. The revisions have made the paper stronger and we deeply thank all the reviewers for their help. 
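Eq. 1 itself is not quoted anywhere in this thread, so as a generic sketch only: a Thought Cloning objective of the kind described combines a per-word loss on the thought stream with a loss on the demonstrated action. The weighting coefficient `alpha`, the toy vocabulary, and the probabilities below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def cross_entropy(probs, target_idx):
    """Negative log-likelihood of the demonstrated token/action."""
    return -np.log(probs[target_idx])

def thought_cloning_loss(thought_probs, thought_targets,
                         action_probs, action_target, alpha=1.0):
    """Illustrative two-term objective: mean per-word loss on the thought
    stream plus a loss on the demonstrated action, weighted by alpha."""
    thought_loss = np.mean([cross_entropy(p, t)
                            for p, t in zip(thought_probs, thought_targets)])
    action_loss = cross_entropy(action_probs, action_target)
    return action_loss + alpha * thought_loss

# Toy example: a 2-word thought over a 3-word vocabulary, 4 possible actions.
thought_probs = [np.array([0.7, 0.2, 0.1]), np.array([0.1, 0.8, 0.1])]
action_probs = np.array([0.05, 0.05, 0.85, 0.05])
loss = thought_cloning_loss(thought_probs, [0, 1], action_probs, 2)
```

The key property motivating the ablation discussion above is that the thought term gives per-word supervision, whereas the BC baseline only ever sees the action term.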
Below we respond to common questions, and the reviewer-specific responses answer unique comments. **1. Regarding our motivation text saying humans think in natural language:** Reviewer QGuE, HBeY: **It is debatable whether humans exclusively think in language.** Thanks for mentioning this. While (as you noted) this issue is not essential for judging whether Thought Cloning is a valuable method (since it performs well), the claim is a central inspiration for the work. To recognize the claim is debated, we will update the text as follows: “While it remains debated whether humans exclusively think in language [Grandchamp et al. 2019, Alderson-Day et al. 2015], some argue that natural language is intricately woven into our thought processes [Premack 2004, Chomsky 2006], conferring distinct cognitive advantages. [Pinker 2003]” **2. Concern about the availability of thought data:** Reviewer 9iZD: ***“How can we effectively collect thoughts for actions to train TC agents?”*** Reviewer GnSR: ***“A major limitation to this approach mentioned in the introduction is the availability of applicable thought data.”*** We apologize for the confusion. It appears there may have been a misinterpretation regarding our stance on the availability of thought data. Our introduction intended to convey the opposite: that thought data is widely available, but it's an avenue that hasn't been sufficiently explored by the community. As an example, we could collect action demonstrations from unlabeled Youtube videos with VPT [Baker et al. 2022, Fan et al. 2022] and pair them with related video transcripts (the closed captions of the video where people narrate what they are doing and why while acting). This gives us a dataset encompassing both human cognition and action [Fan et al. 2022]. We proposed this approach in L62-66 and L91-96 in our paper. A contribution of this paper is introducing a method that can mine that heretofore under-exploited treasure trove of data. 
There are also other (albeit more expensive) ways to get data, such as human volunteers, contractors, employees, or vision-language models that create such narrations for videos. Pdf: /pdf/feaf9f467d2ef5fa77bd4211defabd846a51b644.pdf
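A minimal sketch of the MineCLIP-style heuristic caption cleaning referenced in this response (the domain vocabulary and transcript lines are invented examples; the actual pipeline is more involved): keep only transcript lines that overlap a domain-specific vocabulary, dropping off-topic chatter like “welcome to my channel”:

```python
# Hypothetical domain vocabulary for a Minecraft-like setting.
DOMAIN_VOCAB = {"mine", "craft", "door", "key", "chest", "pick", "place", "build"}

def keep_line(line: str, min_overlap: int = 1) -> bool:
    """Keep a transcript line if it shares enough words with the domain vocabulary."""
    words = set(line.lower().split())
    return len(words & DOMAIN_VOCAB) >= min_overlap

transcript = [
    "welcome to my channel please subscribe",
    "now I pick up the key and open the door",
    "today we build a small chest room",
]
kept = [line for line in transcript if keep_line(line)]
```

An LLM-based filter, as the rebuttal suggests, would replace `keep_line` with a model call judging whether the commentary describes the on-screen activity.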
NeurIPS_2023_submissions_huggingface
2023
Injecting Multimodal Information into Rigid Protein Docking via Bi-level Optimization
Accept (poster)
Summary: The paper proposes a new framework for rigid protein docking called BiDock. BiDock is based on an Evoformer-based model that predicts a distance matrix and a gradient-based optimization of the optimal roto-translation based on the predicted distance matrix. In order to be able to pass gradients through the optimization steps at training time, the authors derive a bi-level optimization procedure and a spectral initialization. The model is trained specifically for rigid docking of antibody chains and evaluated on a range of complexes divided into: general proteins, antibody heavy and light chains, and antibody-antigens. Across all three categories, the method shows improved performance. Strengths: The paper proposes an interesting new approach to deep-learning-based rigid docking using gradient-based pose optimization and propagation through the optimization. Moreover, the spectral initialization provides an interesting and meaningful improvement to the performance. Weaknesses: The paper has good methodological and empirical contributions; however, to make it a strong accept I believe a few components in the manuscript should be improved and/or clarified. Introduction and novelty claims: 1. Personally, I do not understand the strong emphasis that the authors put on “being the first to effectively leverage sequence and structure modal information in rigid protein docking”. This is, I believe, the least significant part of the paper, as: (1) AlphaFold-Multimer already uses sequence and structural-template-based information and the architecture proposed is very similar to that of the EvoFormer blocks, (2) adding sequence-based information to EquiDock would be a very straightforward addition, (3) other existing deep-learning-based docking methods not referenced by the authors, such as [1], already use sequence and structure information for their predictions. I believe that the claims of novelty in this regard should be adjusted taking into consideration (1-3). 
Unclear parts of the method: 2. The loss used to train the model is not clear to me. Equation (12) says that $L^{out}(R, t, \phi) = \lambda_1 L_{dist} + \lambda_2 L_{msa}$; however, neither $L_{dist}$ nor $L_{msa}$ seems to depend on $R$ or $t$. Is this correct? If so, I don’t understand the point of the bi-level optimization, as the gradient of $L^{out}$ with respect to $\phi$ would not depend on $R^*$ or $t^*$. 3. Line 167 says “$\hat{D}$ can be obtained by using the center of each bin”; what is meant specifically by the center of each bin? The center of the bin with the highest likelihood, the median of the predicted binned distribution, or something else? There are several missing details (both in the paper and in the appendix) about the evaluation set-up that should be clarified: 4. How was AlphaFold-Multimer run (and with what set-up in terms of MSAs etc.)? The appendix links to the OpenFold repo, but this does not yet provide Multimer’s replicas. 5. For the other methods, what structures were used as input? I know DB5.5 also provides apo (unbound) structures, which I assume the authors used as input; however, for the two benchmarks they propose, how are the input structures generated? 6. I assume the different methods were run on different hardware (GPUs vs. CPUs); this should be reported in Table 3 or standardized. [1] Ketata, Mohamed Amine, et al. "DiffDock-PP: Rigid Protein-Protein Docking with Diffusion Models." *arXiv preprint* (2023). Technical Quality: 3 good Clarity: 3 good Questions for Authors: See the weaknesses section and the following: 7. Especially with the flexible regions in antibodies, it appears important to consider a more flexible form of binding; could the proposed framework be extended in that direction? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: See weaknesses section Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: - We sincerely thank the Reviewer for all the comments, and we are honored that our work has sparked your interest. We have addressed all your questions below and hope the responses clarify any confusion about our work. 1. > I do not understand the strong emphasis that the authors put on “being the first to effectively leverage sequence and structure modal information in rigid protein docking”. There are some related works, such as AlphaFold-Multimer, EquiDock, and DiffDock-PP. **Answer:** Thanks for your insightful question. (1) AlphaFold-Multimer targets flexible docking, which makes it different from our scope. The main reason is that AlphaFold-Multimer lacks a specific design that allows the direct input of 3D rigid bodies to the structure module for rigid docking. (2) The equivariant GNNs in EquiDock require the feature of each amino acid to be a vector. However, MSA features are tensors with dimensions $F^{msa}\in\mathbb{R}^{N_{cls}\times N_{res}\times 49}$. Thus, there is no straightforward extension. (3) Regarding DiffDock-PP, due to space constraints, please refer to the rebuttal provided for Reviewer XfrH's Q1. We apologize for any inconvenience this may have caused you. In the revised version, we will include these methods in the "Related Work" section, highlighting their differences and reporting additional results in the experiments. 2. > The loss used to train the model is not clear to me. Equation (12) says that $L^{out}(R, t, \phi) = \lambda_1 L_{dist} + \lambda_2 L_{msa}$; however, neither $L_{dist}$ nor $L_{msa}$ seems to depend on $R$ or $t$. **Answer:** Apologies for the confusion. The outer and inner loops share the common goal of learning the optimal roto-translation transformation for rigid docking. Therefore, the inner loss $L^{in}$ in Eq. (1) is also the objective function of the outer loop. To clarify this, we will explicitly include the inner loss $L^{in}$ in Eq. 
(12) as $L^{out}=\lambda_1 L_{dist}+\lambda_2 L_{msa}+\lambda_3 L^{in}$. Thanks for your careful review, and we will fix this error in the revised version. 3. > Line 167 “$\hat{D}$ can be obtained by using the center of each bin”, what is meant specifically by center of each bin? **Answer:** Apologies for the confusion. $\hat{D}$ is obtained by using the mean of each bin. The term "the center of each bin" was not mathematically precise, and we will clarify this point in the revised version. Thanks for your careful review and valuable feedback. 4. > How was AlphaFold-Multimer run (and with what set-up in terms of MSA etc)? **Answer:** Apologies for any confusion. We reproduce AlphaFold-Multimer based on the AlphaFold2 implementation released by OpenFold and evaluate the reproduced version by converting the official JAX checkpoint (https://github.com/deepmind/alphafold) into PyTorch. The setup of the MSA features is the same as in BiDock, as detailed in Appendix B. Additionally, we feed the rigid bodies of each chain as templates to AlphaFold-Multimer. Thanks for your careful review. We will include additional information on the experimental setup of AlphaFold-Multimer in the revised version. Your feedback is much appreciated. 5. > For the other methods, what structures were used as input? **Answer:** Thanks for your comments. (1) It's worth noting that the baselines and our proposed BiDock use the same input. (2) For DB5.5, we utilize the unbound structures (ground truth) provided by the benchmark. Detailed information on the antibody-antigen benchmark is explained in Appendix A. Specifically, obtaining the ground-truth structures of antibody-antigen complexes poses significant challenges in practical applications. Researchers often rely on existing folding models to predict them. To simulate real-world scenarios, we use a specialized antibody model called xTrimoABFold [2] to predict the conformations of antibodies and AlphaFold2 for antigens. 
Furthermore, our dataset has been selected as a benchmark for the antibody-antigen complex structure prediction task on the "Life Science Leaderboard" (https://www.biomap.com/sota/), facilitating public access and fair comparisons. 6. > I assume the different methods were run on different hardware (GPUs vs CPUs), this should be reported in Table 3 or standardized. **Answer:** All experiments for the baselines and our proposed BiDock were conducted on an A100 cluster. The specific environment details are provided in Appendix B as follows: - Operating system: Linux version 5.13.0-30-generic - CPU information: AMD EPYC 7742 64-Core Processor - GPU information: NVIDIA A100-SXM4-80GB Thanks for your suggestion, and we will clarify this point in the revised version to avoid any misunderstandings. 7. > Especially with the flexible regions in antibodies it appears important to consider a more flexible form of binding, could the proposed framework be extended in that direction? **Answer:** Thanks for your insightful question. Rigid docking has practical applications that motivated our focus on this problem. Specifically, (1) rigid docking allows us to input partially known structures, facilitating the solution of challenging docking problems. (2) In certain scenarios, we only need to determine whether a protein exhibits a specific function, focusing on the docking pocket without requiring detailed interface information. In such cases, flexible docking is redundant. (3) Regarding the extension to flexible docking, a well-optimized distance map (cross-modal transformer) and the overall bi-level optimization serve as a good starting point. We can design an inner loop tailored for flexible docking, considering conformational changes, geometric constraints, atom clashes, etc. [1] Ketata M A, Laue C, Mammadov R, et al. DiffDock-PP: Rigid Protein-Protein Docking with Diffusion Models[C]//ICLR 2023-Machine Learning for Drug Discovery workshop. 2023. [2] Wang Y, Gong X, Li S, et al. 
xTrimoABFold: Improving antibody structure prediction without multiple sequence alignments [J]. 2022. --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: Thank you very much for carefully responding to my questions and concerns. I personally believe that the bi-level optimization ideas provided in the paper constitute an interesting novel technical idea and perspective (which, as I mentioned, I think should be the emphasis of the paper). Therefore, even after reading the concerns of the other reviewers, I have raised my score to 7 and recommend acceptance. --- Reply to Comment 1.1.1: Comment: We are truly pleased to have been able to provide satisfactory explanations for the concerns you raised. Your time and attention to our rebuttal are greatly valued, and we are grateful for your willingness to adjust the score accordingly. We will diligently work on polishing the presentation and incorporating the necessary details from the rebuttal into our paper. Your support is invaluable in this process.
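The inner loss $L^{in}$ debated in this thread, the Eq. (1)-style mean squared gap between the inter-chain distances induced by a pose and a target distance map, can be sketched as follows. This is a minimal numpy illustration of the functional form only, not the authors' implementation; `X` and `Y` denote the two chains' coordinates and `D` the target map:

```python
import numpy as np

def inner_loss(X, Y, R, t, D):
    """Mean squared discrepancy between the inter-chain distances induced
    by applying the pose (R, t) to chain Y and a target distance map D."""
    moved = Y @ R.T + t                        # rotate and translate Y: (n, 3)
    diff = X[:, None, :] - moved[None, :, :]   # pairwise differences: (m, n, 3)
    dist = np.linalg.norm(diff, axis=-1)       # ||X_i - R Y_j - t||: (m, n)
    return np.mean((dist - D) ** 2)
```

Substituting the predicted map for `D` gives the inner-loop objective, while substituting the ground-truth map gives the inner loss term that the rebuttal adds to $L^{out}$.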
Summary: This paper studies the rigid protein-protein docking problem. The authors fuse the sequence features and structural features of proteins and unify these features into single features and pair features as done in AlphaFold2. These features are then fed into an Evoformer-like cross-modal transformer to produce updated representations. The updated representations are used to predict the inter-chain distance map and masked MSA tokens. The final rigid-docking results (i.e., a global translation and a global rotation) can be obtained by solving an optimization problem that minimizes the discrepancy between the distance map of the predicted complex and that of the ground truth. The authors claim that they solve the above optimization problem better by providing a better initial rotation and translation with the help of spectral initialization. Strengths: 1. The proposed cross-modal transformer is clear and easy to follow. 2. The spectral initialization of the optimization problem offers good insights for those who are working on structure prediction problems. Weaknesses: 1. Some important related works are not discussed and compared. For example, (1) DiffDock-PP (https://arxiv.org/pdf/2304.03889.pdf) uses torsional diffusion to solve rigid protein docking, and the source code is released. (2) McPartlon & Xu proposed DockGPT for flexible and site-specific protein docking and design. Although they target flexible docking, they also use an Evoformer to predict the inter-chain distance map. (3) Very recently, Chu, Lee-Shin, et al. proposed GeoDock (https://www.biorxiv.org/content/10.1101/2023.06.29.547134v1), a multi-track iterative transformer network for fast and flexible protein-protein docking. (4) Finally, Luo et al. 
proposed xTrimoDock (https://www.biorxiv.org/content/10.1101/2023.02.06.527251v1.full.pdf), which is quite related to this work, as they also predict the inter-chain distance map and compute a global rotation and translation that best fits the predicted distance map. As a submission to the NeurIPS conference, I highly recommend that the authors discuss the connection of this work with the works mentioned above. 2. Figure 1 of the current manuscript is borrowed from xTrimoDock (https://www.biorxiv.org/content/10.1101/2023.02.06.527251v1.full.pdf); this could potentially be problematic. 3. The experimental results are not that clear and convincing. (1) First, DiffDock-PP is not included as a baseline. (2) Second, previous works use ligand-RMSD, complex-RMSD, or interface-RMSD to evaluate the performance of the model. How RMSD is calculated in this work is not revealed. 4. As shown in Eqs. (12), (9), and (11), the definition of the $L^{out}$ loss does not involve $R^{\star}$ and $t^{\star}$. Therefore, the first term in Eq. (13) is equal to zero, and it seems that the outer loss can be directly optimized by minimizing the distance prediction loss and the MSA loss. I'm not sure how the story of bi-level optimization holds in this situation. Can the authors further clarify this point? 5. The authors claim that they use bi-level optimization to solve the problem. However, the effectiveness of the bi-level optimization is not justified. The authors are encouraged to report the results of the variant that does not rely on bi-level optimization, i.e., directly training a distance-map predictor and recovering the complex structures from the predicted distance map. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Please see the Weaknesses. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. 
Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: Please see the Weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: - We sincerely thank the Reviewer for your careful reading. We would like to address the concerns by providing responses as well as additional experimental results. 1. > Some important related works are not discussed and compared. For example, GeoDock, DockGPT, xTrimoDock, and DiffDock-PP. **Answer:** Thanks for the suggestion. (1) GeoDock [1] and DockGPT [2] target flexible docking. Firstly, flexible docking contradicts the rigid-body assumption [3,4], rendering it incomparable with rigid docking models. Secondly, GeoDock was released after the submission deadline, explaining its absence in the references. (2) Regarding xTrimoDock, NeurIPS permits sharing of early versions on platforms like bioRxiv. (3) Before the submission deadline, issues (https://github.com/ketatam/DiffDock-PP/issues/10) with the released code of DiffDock-PP [5] existed. The authors have since resolved them, so we evaluate the performance of DiffDock-PP as shown in the tables below.

**Table** Quantitative comparisons between DiffDock-PP and the proposed BiDock on the DB5.5 test set. (bold: best)

| | RMSD $\downarrow$ | TM-score $\uparrow$ | DockQ $\uparrow$ |
| --- | --- | --- | --- |
| DiffDock-PP | 17.364 $\pm$ 7.262 | 0.670 $\pm$ 0.118 | 0.031 $\pm$ 0.047 |
| BiDock | **7.280** $\pm$ 8.117 | **0.847** $\pm$ 0.158 | **0.564** $\pm$ 0.369 |

**Table** Quantitative comparisons between DiffDock-PP and the proposed BiDock on the VH-VL test set. (bold: best)

| | RMSD $\downarrow$ | TM-score $\uparrow$ | DockQ $\uparrow$ |
| --- | --- | --- | --- |
| DiffDock-PP | 9.680 $\pm$ 5.266 | 0.653 $\pm$ 0.123 | 0.130 $\pm$ 0.115 |
| BiDock | **1.242** $\pm$ 0.602 | **0.966** $\pm$ 0.021 | **0.773** $\pm$ 0.187 |

**Table** Quantitative comparisons between DiffDock-PP and the proposed BiDock on the AB-AG test set. 
(bold: best)

| | RMSD $\downarrow$ | TM-score $\uparrow$ | DockQ $\uparrow$ | maxDockQ $\uparrow$ |
| --- | --- | --- | --- | --- |
| DiffDock-PP | 20.562 $\pm$ 4.237 | 0.578 $\pm$ 0.072 | 0.197 $\pm$ 0.036 | 0.021 $\pm$ 0.020 |
| BiDock | **9.707** $\pm$ 8.759 | **0.773** $\pm$ 0.187 | **0.342** $\pm$ 0.351 | **0.414** $\pm$ 0.386 |

From the results, BiDock generally outperforms DiffDock-PP. One possible reason is that DiffDock-PP directly utilizes MSA embeddings obtained from a pre-trained protein language model. Therefore, the learning of MSA embeddings lacks guidance from rigid structures, and PLMs trained on single chains cannot handle the pairwise contact prediction essential for docking. In contrast, BiDock employs raw MSA features and effectively leverages the rich evolutionary information through a cross-modal transformer, seamlessly integrating it with the structure-modal information. In the revised version, we will include these methods in the "Related Work" section, highlighting their differences and reporting additional results in the experiments. 2. > The experimental results are not that clear. Previous works use ligand-RMSD, complex-RMSD, or interface-RMSD to evaluate the performance of the model. How RMSD is calculated in this work is not revealed. **Answer:** Apologies for the confusion. We used complex-RMSD in our experiments. We will clarify this point in the revised version. 3. > As shown in Eq. (12), (9), (11), the definition of the $L^{out}$ loss does not involve $R^*$ and $t^*$. Therefore, the first term in Eq. (13) is equal to zero, and it seems that the outer loss can be directly optimized by minimizing the distance prediction loss and MSA loss. **Answer:** Apologies for the confusion. The outer and inner loops share the common goal of learning the optimal roto-translation transformation for rigid docking. Therefore, the inner loss $L^{in}$ in Eq. (1) is also the objective function of the outer loop. 
To clarify this, we will explicitly include the inner loss $L^{in}$ in Eq. (12) as: $L^{out}=\lambda_1 L_{dist}+\lambda_2 L_{msa}+\lambda_3 L^{in}$ Thanks for your careful review, and we will fix this error in the revised version. 4. > The effectiveness of the bi-level optimization is not justified. **Answer:** Thanks for the suggestion. We conduct an ablation study using two-stage optimization: training outer and inner loops separately. Specifically, we use cross-entropy and masked MSA losses (Eq. (9) and Eq. (11)) to train cross-modal transformer. With the resulting distance map, we compute roto-translation transformation using inner loss and spectral initialization. The results on antibody-antigen docking are presented in the table below, with "w/o Bi" denoting the variant without bi-level optimization. **Table** Ablation studies on bi-level optimization. (bold: best) ||RMSD $\downarrow$|TM-score $\uparrow$|DockQ $\uparrow$|maxDockQ $\uparrow$| | ------------------- | ----- | -------- | ------- | ----- | | BiDock w/o Bi |10.090 $\pm$ 7.817|0.733 $\pm$ 0.173|0.220 $\pm$ 0.232|0.307 $\pm$ 0.365| |BiDock|**9.707** $\pm$ 8.759|**0.773** $\pm$ 0.187|**0.342** $\pm$ 0.351|**0.414** $\pm$ 0.386| Results support our contributions of using bi-level optimization for end-to-end optimization, where the parameter learning of the cross-modal transformer is tailored for rigid docking. In the revised version, we will incorporate these ablation studies. Your valuable feedback is highly appreciated. [1] Chu L S, Ruffolo J A, Harmalkar A, et al. Flexible Protein-Protein Docking with a Multi-Track Iterative Transformer[J]. bioRxiv, 2023: 2023.06. 29.547134. [2] McPartlon M, Xu J. Deep Learning for Flexible and Site-Specific Protein Docking and Design[J]. bioRxiv, 2023: 2023.04. 01.535079. [3] Vakser I A. Protein-protein docking: From interaction to interactome[J]. Biophysical journal, 2014. [4] Desta I T, Porter K A, Xia B, et al. 
Performance and its limits in rigid body protein-protein docking[J]. Structure, 2020. [5] Ketata M A, Laue C, Mammadov R, et al. DiffDock-PP: Rigid Protein-Protein Docking with Diffusion Models[C]//ICLR Machine Learning for Drug Discovery workshop. 2023. --- Rebuttal Comment 1.1: Title: Thanks for the authors' detailed response. Comment: I have read the authors' responses as well as the other reviewers' comments, and I really appreciate the authors' efforts to address my questions. I now agree with the authors that the comparison with GeoDock and xTrimoDock is not necessary currently. Nevertheless, my biggest concern about the paper, i.e., the formulation of the bi-level optimization, still remains unsolved (we note that Reviewer HJaR also mentions this issue). The authors mention in the replies that the inner loss in Eq. (1) is also the objective function of the outer loop ($L^{out} = L_{dist} + L_{msa} + L^{in}$). Such a formulation does not fit the definition of bi-level optimization. Directly adding the loss of the inner loop to the loss of the outer loop does not make much sense. (If this is the case, why don't we simply optimize the whole outer loop and ignore anything about the inner loop?) Bi-level optimization is a two-step optimization process where one optimization problem (the inner loop) is nested within another (the outer loop). The solution of the outer loop (in this case, the learned cross-modal transformer, which predicts the distance map) directly affects the solution of the inner loop (the optimal transformation given a fixed distance map). Conversely, the solutions of the inner loop (the optimal transformation given a fixed distance map) should affect the optimal solution of the outer loop. As also mentioned by Reviewer HJaR, neither $L_{dist}$ nor $L_{msa}$ seems to depend on $R$ or $t$, which means that the solution of the inner loop does not affect the outer loop. 
Even if you add the loss of the inner loop to the loss of the outer loop, anything happening in the inner loop won't affect the learning of the cross-modal transformer, unless you define some loss in the outer loop based on the solution of the inner loop. To make my point even clearer, let's imagine a hypothetical scenario. Given the solution of the inner loop $(R, t)$, you can calculate the final docked pose of the ligand, and the whole process is differentiable (which means you can accumulate the hypergradient of the inner process, i.e., any loss defined on the final docking pose is differentiable with respect to the parameters of the outer loop). Let's say you define some structural loss in the outer loop (e.g., the RMSD between the docked ligand pose and the ground-truth ligand pose); then the outer loop will depend on $R$ and $t$, and this problem can be considered a bi-level optimization problem. Note that you should keep track of all hypergradients of the inner loop so that you can update the parameters of the cross-modal transformer. I'm open to any further discussion in terms of the bi-level optimization formulation of this work. I personally believe the whole story of the bi-level optimization in this work is a little bit misleading for the community and thus vote for rejection. If the authors can further clarify this point and truly fit the methodology into the framework of bi-level optimization, I'm happy to change my opinion. --- Reply to Comment 1.1.1: Comment: We value your attention to our rebuttal and thoughtful considerations, and we apologize for any misunderstandings. Let's revisit the proposed bi-level optimization framework: (1) The outer loop optimizes the cross-modal transformer to integrate multi-modal information for distance map prediction. Its loss function comprises three parts: the distogram loss (Eq. (9)) and the masked MSA loss (Eq. (11)), which are unrelated to the inner loop, and the inner loss, which is associated with inner-loop optimization. 
(2) Given the predicted distance map $\hat{D}$, the inner loop optimizes the roto-translation transformation $(R,t)$, as in Eq. (1). With the optimized transformation $(\hat{R}(\phi),\hat{\mathbf{t}}(\phi))$, the inner loss replaces the predicted distance map in Eq. (1) with the ground truth: $\mathcal{L}^{in}(\hat{R},\hat{\mathbf{t}},\phi)=\frac{1}{mn}\sum_{i=1}^m \sum_{j=1}^n\left(\left\|X_i-\hat{R}(\phi) Y_j-\hat{\mathbf{t}}(\phi)\right\|-D_{ij}\right)^2.$ The misunderstanding may stem from this replacement. To clarify, our revised version will distinguish the formulation and loss of the inner loop explicitly. Notably, the optimization of the roto-translation transformation relies on the predicted distance map, allowing hypergradients to flow through the cross-modal transformer. Furthermore, we elaborate on two points: - Why not solely optimize the cross-modal transformer using the MSE between predicted and ground-truth distance maps and eliminate the inner loop? The inner loop ensures rigidity. - Why not replace the predicted distance map with the ground truth in the inner loop, removing the outer loop? Inference lacks ground truth, making accurate distance map prediction critical. In conclusion, these explanations illuminate the interdependence and mutual enhancement of the outer and inner loops in our bi-level optimization framework. We appreciate your feedback and aim to refine our work to minimize misunderstandings.
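The hypergradient mechanism discussed in this thread, where the fitted pose depends on the predicted distance map and gradients flow back through the inner optimization, can be illustrated with a scalar toy. This is a hypothetical sketch, not the paper's model: the inner variable `r` stands in for the pose, `phi` for the network output, and `y` for the ground truth.

```python
def unrolled_hypergrad(phi, y, r0=0.0, eta=0.1, steps=200):
    """Toy bi-level setup: the inner loop fits r to the 'prediction' phi by
    gradient descent on (r - phi)^2; the outer loss (r_K - y)^2 compares the
    inner solution to the ground truth y. The sensitivity dr/dphi is
    accumulated alongside the unrolled inner steps, so the outer gradient
    dL_out/dphi depends on the whole inner trajectory."""
    r, dr_dphi = r0, 0.0
    for _ in range(steps):
        # inner update r <- r - eta * d/dr (r - phi)^2, plus its sensitivity to phi
        r, dr_dphi = r - eta * 2.0 * (r - phi), dr_dphi - eta * 2.0 * (dr_dphi - 1.0)
    hypergrad = 2.0 * (r - y) * dr_dphi
    return r, hypergrad
```

With a long enough unroll, `r` converges to `phi` and `dr_dphi` to 1, so the hypergradient approaches the direct gradient of `(phi - y)**2`; with few steps, the outer gradient is damped by the incomplete inner optimization, which is the coupling the reviewer asks about.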
Summary: This paper introduces BiDock, a novel approach that integrates sequence- and structure-modal information to improve the accuracy of rigid protein docking predictions. The proposed method uses multimodal information through bi-level optimization, enabling joint optimization of the docking score and the weights of the multimodal features. A gradient descent method is employed to predict the rotation and translation instead of relying on the Kabsch algorithm. Additionally, a spectral initialization is proposed for the gradient descent. The authors demonstrate the effectiveness of BiDock by comparing it to existing docking methods on various datasets, including two newly curated datasets from the PDB database. Remarkably, BiDock outperforms other methods, particularly excelling in the challenging antibody-antigen docking task. The paper's contributions include the development of a novel approach for integrating multimodal information in protein docking, the introduction of two curated datasets, and a thorough evaluation of the proposed method against baselines across multiple datasets. Strengths: - **Integration of multimodal protein representation**: The paper introduces a novel approach that combines sequence-based representations and structural information, resulting in a comprehensive protein representation. This integration enhances the accuracy of rigid protein docking predictions. - **Bi-level optimization formulation of the protein docking problem**: The authors formulate the docking problem as a bi-level optimization task, which allows for joint optimization of the rigid docking poses and the weights of the multimodal features. By employing a gradient descent method for predicting rotation and translation, the paper offers an alternative to the traditional Kabsch matching algorithm. - **Spectral initialization for gradient descent**: The paper proposes a spectral initialization technique for the gradient descent method used in the bi-level optimization. 
- **Introduction of two new curated datasets**: The authors curate two new datasets from the Protein Data Bank (PDB) database, specifically focusing on antibodies (VH-VL) and antibody-antigen complexes (AB-AG). These datasets provide valuable resources for evaluating protein docking methods, particularly in the context of antibody-antigen interactions. - **Strong experimental results**: The paper demonstrates the superiority of the BiDock model over existing docking methods through comprehensive experiments on multiple datasets. The proposed method particularly excels in the challenging antibody-antigen docking task, showcasing its efficacy in accurately predicting protein docking conformations. - **Clear presentation**: The authors provide clear explanations of the proposed approach, experimental setup, and evaluation metrics, enhancing the clarity and accessibility of the research findings. Weaknesses: - **Unclear motivation**: The paper does not clearly explain why sequential/coevolution representations are useful specifically for rigid docking problems, leaving the reader questioning the relevance and benefits of these representations in this context. Ablation study between implementation with and without sequential information is appreciated. - **Choice of bi-level optimization**: The paper formulates the problem as a bi-level optimization task, but it lacks a clear explanation as to why this approach is chosen over direct end-to-end optimization using the distance map as input. The division of the optimization into an outer loop (distance map prediction) and inner loop (pose optimization) raises questions about the necessity and effectiveness of this bi-level optimization scheme. - **Curated dataset filtering process**: The paper lacks detailed information about the filtering process used to curate the datasets, leaving uncertainty about how the quality of complex structures in the datasets is ensured. 
Without transparent and rigorous filtering criteria, the reliability of the curated datasets may be compromised. - **Computational complexity**: BiDock exhibits higher inference time compared to existing methods like EquiDock and AlphaFold-Multimer. The computational complexity associated with the bi-level optimization process might contribute to this increased inference time, which can impact practical usability and scalability. - **Limited training and testing data**: The paper acknowledges that the available training and testing data are limited, which raises concerns about the generalizability and robustness of the proposed method. Insufficient data may lead to overfitting and an incomplete evaluation of BiDock's performance. - **Incremental novelty**: The paper lacks significant innovation in terms of network structure for BiDock. Instead, it builds upon existing techniques such as cross-modal transformers and roto-translation transformations, which may limit its originality and novelty compared to other state-of-the-art approaches in the field. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Please see the Weaknesses part. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: Yes, the authors discuss about limitation of the proposed method. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
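The review above contrasts gradient-based pose optimization with the Kabsch algorithm. For reference, the classical Kabsch/SVD alignment, which applies when point correspondences are known (unlike the paper's distance-map objective), can be sketched as follows. This is a standard textbook version, not code from the paper:

```python
import numpy as np

def kabsch(P, Q):
    """Closed-form rotation R and translation t minimizing
    sum_i ||R P_i + t - Q_i||^2 over corresponding point sets of shape (n, 3)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t
```

In the paper's setting, residue correspondences between the two chains are not given a priori, only a predicted inter-chain distance map, which is presumably why a gradient-based fit with spectral initialization is used instead of this closed form.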
Rebuttal 1: Rebuttal: - We highly appreciate the constructive comments from the Reviewer on our work. 1. > (1) The paper does not clearly explain why sequential/coevolution representations are useful specifically for rigid docking problems. (2) An ablation study between implementations with and without sequential information is appreciated. **Answer:** Apologies for any confusion. (1) MSAs capture the evolutionary conservation of interacting residues, providing insights into residue interactions and spatial proximity. This aids in identifying crucial residues at the protein-protein interaction interface and enhances docking accuracy. (2) We perform an ablation study by retraining BiDock with all MSA features set to zero. The results for antibody-antigen docking are shown in the table below, labeled as "w/o MSA" for the variant without MSAs.

**Table** Ablation studies on sequential information from MSAs. (bold: best)

| | RMSD $\downarrow$ | TM-score $\uparrow$ | DockQ $\uparrow$ | maxDockQ $\uparrow$ |
| --- | --- | --- | --- | --- |
| BiDock w/o MSA | 20.658 $\pm$ 5.141 | 0.581 $\pm$ 0.094 | 0.0392 $\pm$ 0.081 | 0.046 $\pm$ 0.104 |
| BiDock | **9.707** $\pm$ 8.759 | **0.773** $\pm$ 0.187 | **0.342** $\pm$ 0.351 | **0.414** $\pm$ 0.386 |

The results indicate the essential role of sequential information from MSAs in enhancing docking performance. This underscores the significance of our framework, which combines multimodal fusion and rigid-body docking via bi-level optimization. In the revised version, we will incorporate these explanations and ablation studies to further bolster our contributions. Your valuable feedback is greatly appreciated. 2. > The paper formulates the problem as a bi-level optimization task, but it lacks a clear explanation as to why this approach is chosen over direct end-to-end optimization using the distance map as input. **Answer:** Thanks for your comments. 
The "direct end-to-end optimization" is equivalent to performing only one gradient descent step in the inner loop, approximating $(R^*, t^*)$ with $(R_1, t_1)$ in Eq. (14). Table 5 demonstrates that performance decreases when using gradient descent in the inner loop for 1000 steps compared to 2000 steps, let alone performing only one step. The reduction in gradient descent steps leads to larger approximation errors. Given this, we naturally opt for bi-level optimization over "direct end-to-end optimization." We value this question and will further emphasize the conclusions shown by the ablation studies in the revised version. 3. > The paper lacks detailed information about the filtering process used to curate the datasets, leaving uncertainty about how the quality of complex structures in the datasets is ensured. **Answer:** Thanks for your comments. The dataset extraction principles and PDB identifiers are detailed in Appendix A. Roughly speaking, the training set comprises 4,890 complexes of antibody-antigen pairs with proteins containing a minimum of 30 residues. These complexes involve three chains, including the light and heavy chains of the antibody and one antigen chain, released before January 2022. Similarly, the test set includes 68 antibody-antigen complexes with three chains, released after October 2022. This ensures no data leakage for baselines or our proposed model. Additionally, our dataset has been selected as a benchmark for the antibody-antigen complex structure prediction task on the "Life Science Leaderboard" (https://www.biomap.com/sota/), facilitating public access and fair comparisons. 4. > BiDock exhibits higher inference time compared to existing methods like EquiDock and AlphaFold-Multimer. **Answer:** Thanks for your comments. Although EquiDock is fast, its performance falls short of traditional software due to its limitations in leveraging coevolution information and simple networks. 
On the other hand, AlphaFold-Multimer and our proposed BiDock exhibit comparable inference times. Considering the performance improvement of our model, this trade-off is acceptable. Thanks for your thorough review. We will place greater emphasis on this point in the revised version. 5. > The paper acknowledges that the available training and testing data are limited, which raises concerns about the generalizability and robustness of the proposed method. **Answer:** Thanks for your comments. (1) The three test sets are held-out, and neither our proposed BiDock nor the baselines select checkpoints based on evaluations of the test sets. (2) Similar to the evaluation of AlphaFold2 and AlphaFold-Multimer, we save only one checkpoint after convergence and evaluate its performance on all test sets. Notably, the three test sets have distinct characteristics: DB5.5 includes general proteins; VH-VL focuses on light and heavy chains of antibodies; AB-AG concerns antibody-antigen docking. The performance improvement across these diverse test sets reflects the generalization and robustness of BiDock to some extent. We will include these clarifications in the revised version as suggested. Thanks for your feedback. 6. > The paper lacks significant innovation in terms of network structure for BiDock. Instead, it builds upon existing techniques such as cross-modal transformers and roto-translation transformations. **Answer:** Thanks for your comments. (1) Roto-translation transformation is a standard operation for 3D spatial rotation and translation, and Evoformer is a well-established module like Resnet. (2) Our contributions do not center on network structure design; instead, we emphasize these unique aspects: - We design a framework to naturally integrate the fusion of multimodalities and the docking of rigid bodies through bi-level optimization, establishing a new avenue for rigid protein docking. 
- We solve the above bi-level optimization with unrolled gradients and the derived spectral initialization. - The promising results on three representative datasets demonstrate the effectiveness of the proposed model. --- Rebuttal Comment 1.1: Title: Thank you for your response Comment: Thank you for your detailed reply! I appreciate the extra analysis provided, which has indeed alleviated my concern. Nevertheless, I believe the bi-level optimization approach appears straightforward and lacks depth, leading to significant computational complexity. Overall, I think the novelty of the proposed method is limited, but the task formulation might provide insight to the AI4Science community. As a result, I have adjusted my rating to a borderline acceptance level. --- Reply to Comment 1.1.1: Comment: We extend our sincere gratitude for acknowledging our rebuttal! We are genuinely delighted to have been able to provide satisfactory explanations for the concerns you raised. Your insights into our work hold significant value, and we are thankful for your readiness to adjust the score accordingly. AI4Science offers vast exploration opportunities. Given the practical nature of this field, we place equal emphasis on model novelty and performance improvements. While our proposed BiDock may not be structurally complex, it marks a good starting point, given its novel framework and promising performance. In the revised version, we will diligently refine the presentation and incorporate the necessary details from the rebuttal into our paper. Your support is invaluable throughout this process.
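The unrolled-inner-loop idea discussed in the answer to question 2 can be illustrated with a minimal, hypothetical 1-D sketch (this is not BiDock's actual SE(3) inner problem; the quadratic objective, learning rate, and names are illustrative): the iterate after T inner gradient steps stands in for the true inner minimizer, and a single step (the "direct end-to-end" setting) leaves a much larger approximation error.

```python
def inner_grad(t, target):
    # Gradient of the hypothetical inner objective 0.5 * (t - target)^2,
    # a stand-in for the alignment loss over the rigid transform.
    return t - target

def unrolled_inner(target, steps, lr=0.1, t0=0.0):
    # Approximate the inner minimizer t* = target by unrolling `steps`
    # gradient-descent iterations, analogous to approximating (R*, t*)
    # by the T-th inner iterate.
    t = t0
    for _ in range(steps):
        t = t - lr * inner_grad(t, target)
    return t

# One step ("direct end-to-end optimization") leaves a large error;
# more unrolled steps drive the iterate toward the true minimizer.
one_step_err = abs(unrolled_inner(5.0, 1) - 5.0)
many_step_err = abs(unrolled_inner(5.0, 200) - 5.0)
```

Here the error after T steps shrinks geometrically, mirroring the rebuttal's point that fewer inner steps mean larger approximation error.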
Summary: This paper proposes BiDock, a novel rigid protein docking model that integrates sequence and structure information through bi-level optimization. It achieves promising results, outperforming baselines by up to 234% in challenging antibody-antigen docking. Strengths: As claimed by the authors in the paper, this work represents the first attempt to effectively leverage sequence- and structure-modal information for rigid protein docking. Weaknesses: The authors seem to overlook the discussion and comparison of some protein docking methods based on pre-trained models. For example, [1] utilizes the protein language model ESM2 [2] for the protein docking task, and there are other pre-trained models focusing on protein structures [3] as well as embeddings that combine both sequence and structure information [4]. These methods are worth discussing and applying in the experimental section for a comprehensive evaluation and comparison. In general, applying pre-trained methods directly to downstream protein-related tasks often leads to significant improvements in performance. The paper lacks sufficient details, such as the specific approach for concatenating MSA. It is unclear whether the MSA is directly concatenated or arranged based on homology, similar to AF2-multimer. Providing these implementation details, and making the code available, would enhance the reproducibility and transparency of the research. For antibody-antigen tasks, reporting the RMSDs of the CDRs is crucial. The high variability of CDRs makes predicting this region particularly challenging. In particular, results on the CDR3 region are a critical metric for evaluating the model's performance, and their absence in the authors' discussion is noteworthy. [1] Chu, Lee-Shin, et al. "Flexible Protein-Protein Docking with a Multi-Track Iterative Transformer." bioRxiv (2023): 2023-06. [2] Lin, Zeming, et al. "Evolutionary-scale prediction of atomic-level protein structure with a language model." 
Science 379.6637 (2023): 1123-1130. [3] Guo, Yuzhi, et al. "Self-supervised pre-training for protein embeddings using tertiary structures." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 36. No. 6. 2022. [4] Chen, Can, et al. "Structure-aware protein self-supervised learning." Bioinformatics 39.4 (2023): btad189. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Does the proposed multi-modal model have the capability to handle tasks when there is missing structure? Are there any potential improvements or modifications that could address this issue? Do MSAs of the antibody part really work for this task? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: The paper's limitations include the absence of explicit geometric constraints between residues in learning the distance map and neglecting potential atom clashes. The authors plan to address these issues and extend the framework to encompass general proteins in future work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: - We sincerely thank the Reviewer for spending time and providing valuable feedback. We appreciate all of your suggestions, and we address all your questions below. 1. > The authors seem to overlook the discussion and comparison of some protein docking methods based on pre-trained models. For example, (1) some pre-trained models focus on protein structures [1] as well as embeddings that combine both sequence and structure information [2]. (2) GeoDock [3] utilizes the protein language model ESM2 [4] for the protein docking task. **Answer:** Thanks for your comments. (1) General pre-trained models [1,2] are evaluated on simpler tasks such as classification or GraphQA. However, rigid docking requires a specialized decoder, and standardized evaluation criteria are lacking. Following existing rigid docking models [7,8], we exclude general pre-trained models from our baselines. (2) GeoDock [3] is tailored for flexible docking. Firstly, flexible docking contradicts the rigid-body assumption [5,6], rendering it incomparable with rigid docking models. Secondly, GeoDock was released after the submission deadline, explaining its absence in our references. In the revised version, we will cite and discuss these methods in the "Related Work" section to differentiate the focus. 2. > The paper lacks sufficient details, such as the specific approach for concatenating MSA. It is unclear whether the MSA is directly concatenated or arranged based on homology, similar to AF2-multimer. **Answer:** Apologies for the confusion. (1) The extraction of cluster MSA features $F^{msa}\in\mathbb{R}^{N_{cls}\times N_{res}\times 49}$ is detailed in Appendix B. Roughly, we employ the heuristic approach from AlphaFold-Multimer to pair sequences, then utilize the MSA clustering method as in AlphaFold2. (2) As mentioned in the paper, Eq. (6) requires broadcasting operations. 
Specifically, ${M^{typ}\in\mathbb{R}^{N_{res}\times c_m}}$ is broadcast along the newly added first dimension during addition. Similarly, broadcasting is applied to the newly added first dimension of $M^{ang}$ for concatenation. Thanks for your careful review. We will provide clear explanations in the revised version. 3. > For antibody-antigen tasks, reporting the RMSDs of the CDRs is crucial. **Answer:** Thanks for your comments. Rigid docking [5,6] ensures that the conformation of the antibody is rigid, resulting in identical CDRs across all methods. Therefore, a comparison in this regard becomes unnecessary. 4. > Does the proposed multi-modal model have the capability to handle tasks when there is missing structure? **Answer:** Thanks for your comments. (1) Rigid docking [5,6] assumes access to 3D structures of unbound proteins. (2) When only amino acid sequences are available, advanced folding models like AlphaFold2 can predict the 3D structures of unbound proteins. 5. > Do MSAs of the antibody part really work for this task? **Answer:** We perform an ablation study by retraining BiDock with all MSA features set to zero. The results for antibody-antigen docking are shown in the table below, labeled as "w/o MSA" for the variant without MSAs.

**Table** Ablation studies on sequential information from MSAs. (bold: best)

| | RMSD $\downarrow$ | TM-score $\uparrow$ | DockQ $\uparrow$ | maxDockQ $\uparrow$ |
| --- | --- | --- | --- | --- |
| BiDock w/o MSA | 20.658 $\pm$ 5.141 | 0.581 $\pm$ 0.094 | 0.0392 $\pm$ 0.081 | 0.046 $\pm$ 0.104 |
| BiDock | **9.707** $\pm$ 8.759 | **0.773** $\pm$ 0.187 | **0.342** $\pm$ 0.351 | **0.414** $\pm$ 0.386 |

The results indicate the essential role of sequential information from MSAs in enhancing docking performance. This underscores the significance of our framework, which combines multimodal fusion and rigid-body docking via bi-level optimization. 
In the revised version, we will incorporate these ablation studies to further support our contributions. We value your feedback. [1] Guo, Yuzhi, et al. "Self-supervised pre-training for protein embeddings using tertiary structures." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 36. No. 6. 2022. [2] Chen, Can, et al. "Structure-aware protein self-supervised learning." Bioinformatics 39.4 (2023): btad189. [3] Chu, Lee-Shin, et al. "Flexible Protein-Protein Docking with a Multi-Track Iterative Transformer." bioRxiv (2023): 2023-06. [4] Lin, Zeming, et al. "Evolutionary-scale prediction of atomic-level protein structure with a language model." Science 379.6637 (2023): 1123-1130. [5] Vakser, Ilya A. "Protein-protein docking: From interaction to interactome." Biophysical Journal 107.8 (2014): 1785-1793. [6] Desta, Israel T., et al. "Performance and its limits in rigid body protein-protein docking." Structure 28.9 (2020): 1071-1081.e3. [7] Yan, Yumeng, et al. "The HDOCK server for integrated protein–protein docking." Nature Protocols 15.5 (2020): 1829-1852. [8] Ganea, Octavian-Eugen, et al. "Independent SE(3)-Equivariant Models for End-to-End Rigid Protein Docking." International Conference on Learning Representations. 2022. --- Rebuttal Comment 1.1: Comment: I thank the authors for providing detailed explanations. My concerns have been addressed, and I have raised my score. --- Reply to Comment 1.1.1: Comment: We are delighted to have been able to provide satisfactory explanations for the questions you raised. We greatly value your time and attention to our rebuttal, and we sincerely appreciate your willingness to consider adjusting the score accordingly. We will strive to enhance the presentation and incorporate the additional experiments from the rebuttal into our paper. Your support throughout this process is truly invaluable.
null
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Differentially Private Approximate Near Neighbor Counting in High Dimensions
Accept (spotlight)
Summary: This paper studies the problem of answering approximate range queries privately. The paper presents a data structure to answer queries privately that helps avoid dimension dependence in the additive error for utility but incurs a multiplicative factor. The paper also showcases how to efficiently implement the data structure. Strengths: The paper is well structured and easy to follow, with adequate discussion and comparison with prior work. The approach introduced provides a trade-off between additive error incurred and approximation factor for the query, which to the best of my knowledge is a novel contribution. The lower bound is for a restrictive case, but understanding this dependence is a nice open problem. Weaknesses: The privacy guarantee is limited to $(\varepsilon, \delta)$-DP and not pure differential privacy. Some discussion on limitations or an approach for extension to pure differential privacy might be helpful. Also, it is not clear if the multiplicative factor in utility is necessary or inherent to this approach. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Q1: Though $\varepsilon < 1$ is preferable for privacy, is there a barrier in extending Theorem 1.1 to the case when $\varepsilon > 1$? Q2: How does the bound in Theorem 1.1 compare to other approaches for the case of $c=1$? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: Discussed in weakness section. Since the work is of a theoretical nature, negative societal impact is not apparent. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: W1: The privacy guarantee is limited to (ε,δ)-DP and not pure differential privacy. Some discussion needed. A: We could in fact obtain pure DP, although in this case the algorithm would suffer from a small probability of having large runtime. We focused on approximate DP for simplicity. To convert our algorithm into a pure DP algorithm, in lines 19-21 of the data structure, we add regular Laplace noise with parameter eps/2 (not truncated) to ensure pure DP, and replace c_v with 0 if it does not exceed roughly 1/eps * n^{o(1)} for some sufficiently large n^{o(1)} parameter. (The \Delta in line 20 is a typo and should be 1; we will fix it.) The idea is that when the true count of a region is 0, its noisy count will not exceed the threshold with very high probability, but there is some small probability that it does exceed the threshold. We can compute the probability \zeta that none of the truly empty regions exceeds the threshold after adding noise by simulating a Binomial distribution. In the high probability event that this holds, we do not need to store any of the leaves. Otherwise, there is a small probability that we will store some (or even all) of the leaves and their noisy counts, which makes both the runtime and accuracy much worse. However, in expectation, the runtime and accuracy are still as stated. We will update the paper to discuss this more rigorously. W2: Not clear if the multiplicative factor in utility is necessary or inherent to this approach. A: The cardinality multiplicative factor is quite small (1+o(1)), but getting an algorithm that removes this factor (or a lower bound showing it is necessary) is an interesting open question. Q1: Though ε<1 is preferable for privacy, is there a barrier in extending Theorem 1.1 to the case when ε > 1? A: There does not seem to be any barrier, although for a somewhat technical reason. Specifically, if eps > 1, we can of course get the same bounds as when eps = 0.5. 
Furthermore, as long as eps = n^{o(1)}, we still get the same theorem, because in the expression n^{\rho+o(1)} the o(1) term becomes larger but remains o(1). If eps is larger than n^{o(1)} the privacy guarantees are so weak that it is unclear whether the guarantees are meaningful. Still, the theorem might hold nevertheless. We will investigate this and discuss it in the final version of the paper. Q2: How does the bound in Theorem 1.1 compare to other approaches for the case of c=1 A: There are generic techniques that work for arbitrary ranges (including Euclidean balls in high dimension) and achieve \sqrt{n} additive error, such as the paper [HR10] which we cite on line 124. In our setting, one can think of the Q and U terms being exponential in the dimension, though in our setting the dimension can be assumed as logarithmic in n by standard dimension reduction procedures. Hence, their bound is roughly proportional to sqrt{n}. To the best of our knowledge, no other results are known for high-dimensional Euclidean space. Our bound is only n for c = 1, as we mainly focused on getting the right dependence for larger approximation constants c. We will discuss the comparison of [HR10] with our work in more detail, and if it is possible to combine our techniques with [HR10] to get improved results for smaller c, we will include this in our final version of the paper. --- Rebuttal Comment 1.1: Title: Response Comment: I thank the authors for their response. I believe including some discussions on these would make the results more complete and so I'm happy to increase my score.
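The thresholding idea from the W1 answer above can be sketched in a few lines of self-contained Python. This is only an illustrative toy, not the paper's exact calibration: the function names, the threshold value, and the choice of noise scale 2/eps are assumptions for the sketch.

```python
import math
import random

rng = random.Random(1)

def laplace_noise(scale):
    # Sample Laplace(0, scale) by the inverse-CDF transform.
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def privatize_leaf_counts(counts, eps, threshold):
    # Add Laplace(2/eps) noise to each leaf count, then keep only counts
    # above the threshold: a truly empty region is dropped with very
    # high probability, so the sparse representation stays small, while
    # a large true count survives essentially unchanged.
    noisy = {}
    for leaf, c in counts.items():
        nc = c + laplace_noise(2.0 / eps)
        if nc > threshold:
            noisy[leaf] = nc
    return noisy

out = privatize_leaf_counts({"dense_leaf": 1000, "empty_leaf": 0},
                            eps=0.5, threshold=50)
```

With noise scale 2/eps = 4, the chance that an empty leaf's noisy count exceeds the threshold 50 is about exp(-12.5)/2, which matches the "very high probability" argument in the answer above.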
Summary: The paper shows how to use a variant of LSH to approximately count the neighbors in an r-ball (r fixed a priori). It shows how to obtain a differentially private LSH sketch. It analyzes the algorithm’s theoretical properties. Strengths: The technical contributions look solid and the analyses seem correct, e.g. using a weaker Markov inequality to handle non-independence of points. It shows how to avoid an exponential dependence on the dimension. The algorithm itself is fairly clear. The paper makes an interesting claim that their method also leads to the most space efficient approximate nearest neighbor search. Weaknesses: The presentation leaves a great deal to be desired. After the introduction, the exposition generally seems hastily arranged. Much of it is spent on the detailed analysis without offering much insight into either the algorithm or a high-level overview of the proof strategy. The analysis skips details and crams equations into paragraphs, which makes it harder to read. Since the proof is not short and many of the proofs are already in the appendix, it’d make sense to only outline the proof. There is zero empirical validation. The result and algorithm are purely theoretical contributions. There doesn’t seem to be any exposition supporting the claim that the paper yields the most space efficient known approx nearest neighbor search. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. The data structure created appears to be a (completely) random forest. What are the relationships between this work and differentially private random forests? 2. Where do the 0.9 probability using Markov’s and 99% probability using Chernoff’s come from? I don’t see any other constants chosen that ensure these probabilities are reached. 3. The extension to data not on a sphere is completely punted to the appendix. Can you give some insight on how this works, under what conditions it works, and what the impact to privacy is in the main text? 4. 
The overview says that you “force” the LSH algorithm to create proper partitions. But where do you do this? Is it from the tree structure with depth? (as opposed to a stump-like structure) And doesn’t a single projection in LSH partition points into buckets so you can do fast lookups? Or by partition do you mean some fine partitioning? 5. Can the structure answer a kNN-like query? I.e., what is the minimum radius r which contains k neighbors? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: W1: The presentation leaves a great deal to be desired. A: Apologies if the presentation was unclear. We will implement the reviewer’s suggestions in the final version of the paper. W3: There doesn’t seem to be any exposition supporting the claim that the paper yields the most space efficient known approx nearest neighbor search. A: We assume that the reviewer refers to the sentence “This is of separate interest, as this yields the most efficient algorithm for approximate nearest neighbor search with space O(nd), improving over [Kap15]”. [Kap15] obtained a data structure with O(nd) space and query time dn^rho for rho=4/(c^2+1). Note that this gives a non-trivial query time only for c>\sqrt(3). In contrast, our data structure obtains query time with rho=4c^2/(c^2+1)^2, which is smaller than 1 for all c>1. Q1: The data structure created appears to be a (completely) random forest. What are the relationships between this work and differentially private random forests? A: Thank you for an interesting suggestion. We are not experts on differentially private random forests, but after a cursory examination of the literature we believe that random forests are typically data-dependent, which makes it hard to guarantee privacy for range queries. (For the same reason, in our paper we are using data-independent instead of data-dependent LSH, despite the fact that the latter has slightly better bounds.) If the reviewer has concrete suggestions for random forest papers we should compare our methods to, we will perform a more in-depth analysis. Q2: Where do the 0.9 probability using Markov’s and 99% probability using Chernoff’s come from? I don’t see any other constants chosen that ensure these probabilities are reached. A: We use 0.9 and 99% only to simplify the presentation. 
These probabilities can be made arbitrarily close to 1 by adjusting the big-Oh constant in the appropriate parameters (the error bound for Markov and the number of repetitions for Chernoff). Q3: The extension to data not on a sphere is completely punted to the appendix. Can you give some insight on how this works, under what conditions it works, and what the impact to privacy is in the main text? A: We moved this entirely to the appendix as it is essentially a corollary of Bartal, Recht, and Schulman [BRS11], as we note in lines 338-341. We convert solving the nearest neighbor problem in Euclidean space (not on the sphere) with radius r into solving the problem on the sphere with the same radius. The high-level idea of BRS11’s result is to provide an efficient map from Euclidean space onto the unit sphere, which ensures close points stay close and far points stay far. We use this result as a black box, and this reduces solving the problem in Euclidean space to solving it on the sphere, which is our main algorithm. There is a slight distortion (which BRS11 rigorously quantifies), but it is insignificant for our purposes. Because we perform the private computation after this embedding, we do not sacrifice privacy. Q4: The overview says that you “force” the LSH algorithm to create proper partitions. But where do you do this? A: The specific place in Algorithm 1 where this is done is line 17. Breaking out of the “for i” loop ensures that we map each point p to only one child of the node v. Q5: Can the structure answer a kNN-like query? We believe that given our data structure, one can use a similar approach to [HY21], and approximately retrieve the distance to the k-nn. In particular, given the parameter k, the goal is to retrieve r such that $r_k \leq r \leq (1+\alpha) r_{k(1+o(1))+2t}$ where $r_i$ is used to denote the smallest radius around $q$ such that the ball $B(q,r_i)$ contains at least $i$ points. 
The approach of [HY21] is to roughly perform a binary search to approximately find the right radius at which there are at least k points. Please refer to Lemma 13 and Theorem 14 of [HY21]. We note, however, that there are two differences in our case: * [HY21] assumes that the data points are from the discrete cube [u]^d and thus the minimum and maximum radius are bounded by 1 and $\sqrt{d}\,u$. Our algorithm does not have this restriction and therefore, in order to answer k-nn type queries, one needs to further assume a bounded aspect ratio. Given that the minimum distance to any point is 1 and the maximum distance is $\sqrt{d}\,u$, the algorithm of [HY21] queries the data structure $g = \log_{1+\alpha}(\sqrt{d}\,u)$ many times to find the right radius at which there are at least k points included in the data structure. * However, in our case our algorithm works for a fixed r, and thus we now need to maintain g different data structures. Thus we need to use stronger privacy parameters in each data structure so that, by composition theorems, the overall privacy is preserved. This is as opposed to [HY21], where they did not need a separate data structure for different values of r and could reuse their data structure to answer queries for all values of r. We will verify this, and if correct, include this observation in the final version. --- Rebuttal Comment 1.1: Comment: Thank you for your rebuttal. I've bumped up the rating.
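As a side note on the Q2 answer above, the repetition-based amplification can be simulated with a short, self-contained sketch. The 0.9-accurate estimator here is entirely hypothetical; the point is only that the median of independent repetitions is accurate with probability exponentially close to 1, per a Chernoff bound.

```python
import random
import statistics

random.seed(0)
TRUE_VALUE = 100.0

def noisy_estimate():
    # Hypothetical estimator: within +/-5 of the truth with probability
    # about 0.9 (the Markov-style guarantee), else badly wrong.
    if random.random() < 0.9:
        return TRUE_VALUE + random.uniform(-5.0, 5.0)
    return TRUE_VALUE + random.uniform(50.0, 100.0)

def amplified_estimate(k=15):
    # Median of k independent runs: it is wrong only if at least k/2
    # runs fail, which a Chernoff bound makes exponentially unlikely.
    return statistics.median(noisy_estimate() for _ in range(k))

trials = 2000
single_ok = sum(abs(noisy_estimate() - TRUE_VALUE) <= 5.0
                for _ in range(trials)) / trials
median_ok = sum(abs(amplified_estimate() - TRUE_VALUE) <= 5.0
                for _ in range(trials)) / trials
```

Empirically, `single_ok` stays near 0.9 while `median_ok` is essentially 1, which is the sense in which adjusting the number of repetitions pushes the success probability arbitrarily close to 1.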
Summary: This paper provides a new polytime algorithm for differentially private approximate near neighbor counting---that is, privately counting the number of points inside l2 balls of fixed radius r, or more precisely a relaxation that may answer any value between the number of datapoints in B(x,r) and the number of datapoints in B(x, c*r) for parameter c. Existing algorithms incur either additive error that is at least a fixed polynomial in the number of points n or additive error that is logarithmic in n but grows exponentially in the dimension d. The algorithm in this paper incurs additive error that is n^(O(1/c^2)) where c is the parameter above, which can be made an arbitrarily small polynomial in n and has no dependence on d, at the cost of also introducing a small multiplicative error term. (As a side result, the paper also provides a lower bound showing that for L-infinity balls, it is not possible to have additive error n^(o(1)) for constant c. It poses the more immediately relevant lower bound question for L2 balls as an open question.) Strengths: This is a nice result that improves on the additive accuracy of known results when both n and d are large. The algorithm is natural and interesting, and the paper is well-written and enjoyable. Weaknesses: It's not clear whether the multiplicative approximation factor is necessary to get a good additive dependence on both n and d. (It's perfectly reasonable to trade off a small multiplicative factor for a better additive term, so this is a bit of a nitpick. However, it does mean that the result is incomparable rather than strictly better compared to prior work even in the regime when n and d are both large.) Relatedly, the lower bound (in addition to being for l_\infty rather than l_2) does not take into account the multiplicative approximation, so it's not clear whether it's possible to obtain additive error n^{o(1)} for constant c if we also allow a multiplicative error. 
Minor comments: The abstract should mention the multiplicative aspect of the approximation. 42: Maybe say explicitly \ell_2 ball here, not just ball. 69: The accuracy guarantee should only hold with high probability for the m queries q_1, ..., q_m, not for every q\in \mathbb{R}^d. 128: I'm not positive, but I think one of the interval query results of BNSV15 has been improved in subsequent work to reduce the gap between the upper and lower bounds. 200: It's technically not a partition, since some points are not sent to any child. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1) Does the algorithm extend to other norm balls or other natural families of queries? 2) Is there hope of removing the multiplicative approximation factor? Or else reason to believe that it is necessary? 3) Do you think the analysis could go through for adaptive queries as well, or is it clear that the restriction to non-adaptive queries is necessary? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: W1: It's not clear whether the multiplicative approximation factor is necessary to get a good additive dependence on both n and d. ... Relatedly, the lower bound (in addition to being for l_\infty rather than l_2) does not take into account the multiplicative approximation. A: Indeed, we do not know the answer to this question. The cardinality multiplicative factor is quite small (1+o(1)), but getting an algorithm that removes this factor (or a lower bound showing it is necessary) is an interesting open question. Q1: Does the algorithm extend to other norm balls or other natural families of queries? A: The algorithm should extend easily to L1 balls, with somewhat worse performance bounds. This is because the L1 norm can be embedded into the L_2 norm squared, which (for range queries) is equivalent to the L_2 norm, though the approximation factor c becomes c^2. We will verify this, and if correct, include this observation in the final version. Q3: Do you think the analysis could go through for adaptive queries as well, or is it clear that the restriction to non-adaptive queries is necessary? A: We believe that it should be possible to extend the analysis to adaptive queries, at the cost of multiplying the additive error and runtime by a factor roughly d. The idea is that it should be possible to round every query point to an r-net, and then we just need an algorithm able to answer any query among (1/r)^{O(d)} queries accurately with failure probability (1/r)^{-O(d)}, so that by a union bound all queries are answered correctly. Such a data structure can be constructed by replicating the data structure in the paper d log(1/r) times. Since we need many replicas, each copy needs privacy with parameter eps replaced by eps/d, which multiplies the error/runtime by a factor of poly(d). We will verify this, and if correct, include this observation in the final version.
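The budget-splitting step in the adaptive-query answer can be made concrete with a tiny sketch of naive sequential composition. The replica count g = d * log(1/r) follows the r-net argument above; the function name and the specific parameter values are illustrative assumptions.

```python
import math

def per_replica_epsilon(eps_total, d, r):
    # Naive sequential composition: running g independent eps'-DP copies
    # of a data structure is (g * eps')-DP overall, so each replica gets
    # eps' = eps_total / g.  Here g = ceil(d * log(1/r)) copies cover
    # the (rounded) query points of the r-net.
    g = max(1, math.ceil(d * math.log(1.0 / r)))
    return eps_total / g, g

eps_each, g = per_replica_epsilon(eps_total=0.5, d=20, r=0.1)
# The per-replica Laplace noise scale grows like g / eps_total,
# which is the poly(d) blow-up in error mentioned above.
```

For d = 20 and r = 0.1 this splits the budget across g = 47 replicas, so each copy runs with a much smaller epsilon and correspondingly larger noise.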
Summary: In this work, the authors propose a differentially private data structure to approximately count the number of data points from within a dataset that lie within a certain small radius of a query point. The preliminary data structure, intended for datasets that lie on a unit sphere, recursively splits the region into small caps. This is represented using a T-ary tree of height K. The leaf nodes of this tree maintain counts of all the points that lie within their corresponding regions. Privacy is obtained by perturbing these points using a truncated Laplacian mechanism. The proofs follow from simple Gaussian concentration bounds and standard Laplacian mechanism. Strengths: - The problem is relevant and interesting. - The solution is simple and elegant and circumvents the drawbacks of prior works. - The authors provide a novel insight for using Locality sensitive hashing schemes for approximate counting. - The presentation is very clear and concise. Weaknesses: - The improved results only hold for small values of r which is fixed in advance and is not a part of the input. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - Can you convert any data-independent LSH with LSH constant \rho to construct an approximate near neighbor counting using a similar tree structure in a black-box fashion? For instance, each internal node will correspond to all the points from the parent node that fall within a certain region. The error estimates will be analyzed in a similar fashion and will be guaranteed by the LSH parameters. The insight comes from the \rho used in the theorem statement whose value is exactly the optimal LSH constant for data-independent schemes. - How about data-dependent LSH? The construction of Andoni & Razenshteyn provides a better LSH constant and does a recursive splitting that looks similar to the one used in this work. Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
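The leaf-count perturbation step the summary describes can be illustrated with a small sketch. This is a hedged toy version, assuming a truncated Laplace mechanism with the standard (eps, delta) truncation bound; the function names and parameters are illustrative, not the paper's exact construction.

```python
import math
import random

def truncated_laplace(scale, bound, rng=random):
    """Sample Laplace(0, scale) noise by inverse-CDF, rejecting draws
    outside [-bound, bound] so the noise stays bounded."""
    while True:
        u = rng.random() - 0.5
        z = -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
        if abs(z) <= bound:
            return z

def noisy_leaf_counts(leaf_counts, eps, delta):
    """Perturb per-leaf point counts so that releasing all counts is
    (eps, delta)-DP when neighboring datasets differ in one point
    (sensitivity 1 per leaf in this toy setting)."""
    scale = 1.0 / eps
    bound = scale * math.log(1 + (math.exp(eps) - 1) / (2 * delta))
    return [c + truncated_laplace(scale, bound) for c in leaf_counts]
```

A near-neighbor count for a query would then be the sum of noisy counts over the leaves (caps) the query looks up, so the additive error scales with the number of leaves touched rather than with n.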
Rebuttal 1: Rebuttal: W: The improved results only hold for small values of r which is fixed in advance and is not a part of the input. A: It is true that our algorithm assumes that r is fixed. However, it does not need an assumption that r is small; see Appendix A.2. Q1: Can you convert any data-independent LSH with LSH constant \rho to construct an approximate near neighbor counting using a similar tree structure in a black-box fashion? For instance, each internal node will correspond to all the points from the parent node that fall within a certain region. The error estimates will be analyzed in a similar fashion and will be guaranteed by the LSH parameters. The insight comes from the \rho used in the theorem statement whose value is exactly the optimal LSH constant for data-independent schemes. A: The answer depends on whether privacy is considered. Without the privacy constraint, it is indeed possible to transform any LSH scheme for ANN search into an ANN counting data structure. But once we consider the privacy constraint, there does not seem to be a natural way to do so. Specifically, our algorithm builds a decision tree that partitions the space/dataset into buckets, where the query looks up a subset of buckets. Constructing such a structure for an arbitrary LSH family is, to our knowledge, an open problem. Q2: How about data-dependent LSH? The construction of Andoni & Razenshteyn provides a better LSH constant and does a recursive splitting that looks similar to the one used in this work. A: Even though data-dependent LSH indeed gives better exponents, it seems particularly challenging to leverage it in a differentially private algorithm. One challenge is that the space partitions intrinsically depend on the dataset, and hence it may be hard to control the privacy leakage here. 
Another challenge is that all known data-dependent LSH schemes are not pure space partitions --- a crucial condition we use to obtain the 1+o(1) multiplicative approximation factor --- and would instead give a super-constant factor. --- Rebuttal Comment 1.1: Comment: Thanks for entertaining my questions. I enjoyed reading the paper.
Rebuttal 1: Rebuttal: We thank all reviewers for their useful comments and feedback. We will fix the typos and presentation issues in the final version of the paper. In what follows we address the issues identified by the reviewers as weaknesses and/or listed as questions.
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
WalkLM: A Uniform Language Model Fine-tuning Framework for Attributed Graph Embedding
Accept (poster)
Summary: This paper studies the universal language model for generic graph representation learning. To model the complex attributes on multiple types of nodes and links with the consideration of graph structure, the authors propose WalkLM. Specifically, to compose meaningful textual sequences, attributed RWs and an automated program are exploited. Besides, to transfer the capability of language models, a fine-tuning strategy is designed to extract embedding vectors from the LM. Extensive experiments on real-world datasets show the superiority of the proposed method over state-of-the-art methods. Strengths: S1. This paper is well motivated. Considering that graph representations are generally limited to specific downstream predictions, and that performance may be unsatisfactory with unsupervised GNN approaches, the authors integrate language models (LMs) and random walks (RWs) to obtain unsupervised generic graph representations. S2. The paper is well written and organized, making it easy to follow for readers, and the work achieves promising results on different downstream tasks. S3. The problem setting is quite realistic, and the experimental results are also encouraging. In my opinion, this work has the potential to promote the application of general-purpose graph methods to more real-world problems. S4. The adoption of random walks is a highlight of this paper, as they can easily capture flexible graph topological structures without supervision for various independent downstream tasks. Weaknesses: W1. It is suggested to include a deeper analysis of why the proposed framework stays strong with a small amount of training data in the few-shot setting. W2. The analysis of some hyperparameters needs to be considered, as the hyperparameters may have an impact on the performance of the downstream tasks. For example, the latent dimension d and the number of masked samples. 
Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: NA Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: See the Weaknesses part. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
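The attributed random walk textualization summarized in this review can be sketched as follows. This is a minimal illustration, assuming a simple adjacency-list graph with one textual attribute per node and a termination probability alpha; the toy graph, attribute strings, and function name are invented for the example, not taken from the paper.

```python
import random

def attributed_random_walk(graph, attrs, start, alpha=0.15, rng=random):
    """Walk the graph from `start`, stopping with probability alpha at each
    step (or when stuck), then textualize the walk by concatenating the
    visited nodes' attribute strings into one LM-ready sequence."""
    walk = [start]
    while graph.get(walk[-1]) and rng.random() > alpha:
        walk.append(rng.choice(graph[walk[-1]]))
    return " ".join(attrs[v] for v in walk)

# toy attributed graph: one paper node linked to two author nodes
graph = {"p1": ["a1", "a2"], "a1": ["p1"], "a2": ["p1"]}
attrs = {
    "p1": "paper titled 'graph survey'.",
    "a1": "author named Alice.",
    "a2": "author named Bob.",
}
```

Sampling N such sequences yields a text corpus on which a pre-trained language model can then be fine-tuned with masked prediction.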
Rebuttal 1: Rebuttal: We thank the reviewers for their positive and detailed comments, which affirm the importance of the problem we studied. In response to the weaknesses, our answers are as follows: > **W1:** It is suggested to include the deeper analysis on why the proposed framework can stay strong with a small size of training data for results in the few-shot setting. Through the novel textualization process that converts general attributed graphs into text-like sequence data, our proposed WalkLM can leverage the capabilities of modern language models for graph representation learning. With the extensive pre-training of LMs on broad text corpora, the model can easily understand meaningful node attributes given a new graph, while the random walk strategy further allows it to capture graph structures. This is how WalkLM can consistently exhibit superior performance in the few-shot setting. > **W2:** The analysis of the some hyperparameters need to be considered, where the hyperparameters may have an impact on the performance of the downstream tasks. For example, the latent dimension d and the number of masked samples. We have analyzed specific hyperparameters in Section 5.5 of our paper (i.e., the number of sampled walks $N$ and the termination probability $\alpha$). Furthermore, we have added experiments on the suggested hyperparameters. Regarding the latent dimension $d$, it's a common practice to utilize the output dimension of the original LMs (e.g., the dimension in GPT is 768 [1]). 
For the ratio of masked samples $m$, the specific results are listed as follows, where the optimal value across different tasks is 0.15, which is consistent with the empirical selection in our paper and the previous work [2]-[4]:

**Table 1: Different downstream task results (%) with varying $m$ on PubMed.**

| Task | Node Classification | | Link Prediction | |
| :---: | :---: | :---: | :---: | :---: |
| Metric | Macro-F1 | Micro-F1 | AUC | MRR |
| $m=0.05$ | 52.97 | 56.33 | 83.16 | 93.47 |
| $m=0.15$ | **60.42** | **62.34** | **85.65** | **94.16** |
| $m=0.25$ | 53.80 | 56.09 | 82.92 | 93.75 |
| $m=0.35$ | 52.22 | 55.61 | 82.38 | 92.72 |

**Reference** [1] Improving language understanding by generative pre-training, 2018. [2] BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, Proceedings of NAACL-HLT, 2019. [3] RoBERTa: A Robustly Optimized BERT Pretraining Approach, arXiv e-prints, 2019. [4] Clinical-BERT: Vision-Language Pre-training for Radiograph Diagnosis and Reports Generation, AAAI, 2022. --- Rebuttal Comment 1.1: Title: Response after rebuttal Comment: The authors' rebuttal addressed most of my concerns and I am happy to see the paper accepted (raising my score from 7 to 8). --- Reply to Comment 1.1.1: Title: Thanks for the response Comment: Dear reviewer DtxK, We thank you for your response and appreciation of our work and rebuttal. We will make sure to incorporate the new results and discussions into our revision. Best, Authors
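The masked-sample ratio $m$ discussed in the rebuttal controls what fraction of tokens in each textualized walk is hidden during fine-tuning. Below is a minimal sketch of that corruption step, with an invented token list and a generic `[MASK]` placeholder rather than the paper's exact tokenizer:

```python
import random

def mask_tokens(tokens, m=0.15, mask_token="[MASK]", rng=random):
    """Hide a fraction m of the tokens (at least one), returning the
    corrupted sequence plus the masked positions the LM must predict."""
    n_mask = max(1, int(len(tokens) * m))
    masked = set(rng.sample(range(len(tokens)), n_mask))
    corrupted = [mask_token if i in masked else t for i, t in enumerate(tokens)]
    return corrupted, sorted(masked)
```

With m = 0.15, a 10-token sequence gets a single masked position, matching the BERT-style default the rebuttal finds optimal.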
Summary: GNNs require sufficient training data for downstream tasks to achieve strong performance. Self-supervised learning approaches are inefficient due to the presence of a variety of node attributes and complicated relations between nodes. Inspired by the success of LLMs, the authors convert the graphs into natural language to be consumed by pre-trained language models. Random walk sequences are extracted from the graph and are converted into text through entity-level and walk-level textualization. A RoBERTa model is then fine-tuned to predict the masked nodes/edges as textual sequences. The approach is tried on various node classification and link prediction tasks with significant improvements. Strengths: 1. Since the main crux of the approach is how we textualize the graphs, the method is highly flexible and can extract node, edge, sub-graph, path-specific, or even graph embeddings. 2. I like how self-sufficient the paper is. All essential material to understand and analyze it has been fitted into the main paper. 3. The technique has been compared to a lot of standard GNN approaches, and in the ablation LLM + GNN have been combined to further enhance performance and use the best of both worlds. 4. Overall the approach is elegant and easy to understand. It brings huge improvements to downstream tasks. I think the approach is a significant contribution to the domain of LLMs and graphs. Weaknesses: 1. The authors do not present results on the graph-level classification task. Only node classification and link prediction tasks have been presented. It would be interesting to see if aggregating node embeddings for graph-level tasks also performs well, as this task needs more context as compared to node or edge classification. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Do the authors have an opinion on applying the same technique to non-textual networks with non-textual IDs and numeric attributes? 
I think the bigger question is how well can transformer-like models embed graph structures without the necessity of using GNN-like graph embeddings. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: 1. The process of converting graphs to text as presented in the paper consists of entity level textualization which can be taxing as the number and types of nodes increase to millions. 2. Extracting random walks is an expensive operation and this method will have scalability issues when the amount of nodes/edges increases to millions or billions Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the overall positive summary and accurate description of our major contributions. As for the several weaknesses you mentioned, our responses are listed as follows: > **W1:** The authors do not present results on the graph level classification task. Only node classification and link prediction tasks have been presented. It would be interesting to see if aggregating node embeddings for graph-level tasks also performs well as this task needs more context as compared to node or edge classification. We appreciate your constructive suggestion. Graph-level classification presents its own set of challenges, which requires holistic capturing of graph structures and often does not rely much on attributes. Therefore, it is difficult to find a universal representation learning approach that solves all different levels of graph mining tasks. Technically, adapting our method to graph-level classification necessitates some subtle decisions (such as whether to include the graph ID as a virtual node). Following the reviewer's suggestion, we've conducted a preliminary analysis on aggregating our learned node embeddings for graph-level tasks. Specifically, we adopt the widely-used MUTAG dataset and use mean accuracy as the metric [1][2]. The results on the popular MUTAG dataset are listed in Table 1. Although the findings are encouraging and show the potential of WalkLM, further studies are still needed to establish a clear advantage of our approach over SOTA graph classification baselines.

**Table 1: Accuracy results (%) of graph-level classification on MUTAG.**

| Model | ConvE | ComplEx | HinVec | LM(DRoBERTa) | WalkLM w/o. graph-ID | WalkLM |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| Accuracy | 77.64 | 78.69 | 78.72 | 79.23 | 79.77 | **81.39** |

**Reference** [1] S2GAE: Self-Supervised Graph Autoencoders are Generalizable Learners with Graph Masking, WSDM, 2023. [2] An end-to-end deep learning architecture for graph classification, AAAI, 2018. > **Q1:** Do the authors have an opinion on - applying the same technique for nontextual networks with nontextual IDs and numeric attributes? I think the bigger question is how well can transformer-like models embed graph structures without the necessity of using GNN-like graph embeddings. Thanks for the question. In this work, we apply RWs to help the LM in capturing graph structures. In our experiments, we have found that nontextual IDs can be effectively embedded as new tokens that appear in different random walk sequences. However, we have not yet found an effective way to model fully nontextual networks and numeric attributes. It is promising to establish our framework for more general graph representation along with the active research on more powerful and transparent LMs. > **L1:** The process of converting graphs to text as presented in the paper consists of entity level textualization which can be taxing as the number and types of nodes increase to millions. We only need to perform the rule-based textualization once for every node during the pre-processing stage, which is not only pretty fast but also highly amenable to parallelization. > **L2:** Extracting random walks is an expensive operation and this method will have scalability issues when the amount of nodes/edges increases to millions or billions. Instead of being a limitation, using random walks for capturing graph structures is in fact very efficient and scalable, which is an advantage of our method. Specifically, in industry, random walks on large graphs such as those with millions to billions of nodes can be done on CPUs with terabytes of memory with numerous threads in parallel [1]. 
**Reference** [1] Pixie: A system for recommending 3+ billion items to 200+ million users in real-time, WWW, 2018.
Summary: The paper proposes a method for the knowledge-graph-embedding (KBE) task using an integration of a language model and random walks. Specifically, the authors first verbalize paths obtained via random walks in the KB, then fine-tune a language model on the verbalized paths, and finally use the embedding layer of the language model as the embedding of the KB. The numerical studies compare 9 different baseline graph embedding methods on the downstream tasks of node classification and link prediction. The proposed method WalkLM shows good performance compared to the listed baselines. Strengths: 1. The paper is well organized and easy to follow. 2. The proposed method WalkLM shows good performance compared to baselines. Weaknesses: 1. My top concern is the novelty. The proposed method simply fine-tunes a language model on paths from random walks. However, similar ideas have been widely explored by previous papers like [1]-[4]. 2. In Section 2 (related work), a lot of related papers about KB + language models besides [1]-[4] are not included. [1] KEPLER: A Unified Model for Knowledge Embedding and Pre-trained Language Representation [2] Exploiting Structured Knowledge in Text via Graph-Guided Representation Learning [3] Deep Bidirectional Language-Knowledge Graph Pretraining [4] Jaket: Joint pre-training of knowledge graph and language understanding Technical Quality: 2 fair Clarity: 3 good Questions for Authors: It would be great to include more recent KB + language model works as baselines to show the good performance of the presented method. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the comments. Our responses are listed as follows: > **W1:** My top concern is the novelty. The proposed method simply fine-tuned a language model based on path from random walk. However, similar ideas have been widely explored by previous papers like [1]-[4]. The setting and technical design of our method are new, where we leverage the capability of modern LMs for general attributed graph representation learning via a novel graph textualization process. The approach is concretely backed up by graph theory and properly leverages the advantages of random walks, which have been shown effective in capturing flexible graph topological structures. Note that, we do not intend to claim much novelty in the LM fine-tuning process. However, we believe that fine-tuning LMs for vertical or non-textual domain data and tasks can indeed lead to important discoveries, and many impactful contributions in the field have hinged on endeavors as such [1]-[5]. **Reference** [1] Training language models to follow instructions with human feedback, NeurIPS, 2022. [2] Fine-tuning language models to find agreement among humans with diverse preferences, NeurIPS, 2022. [3] Generating training data with language models: Towards zero-shot language understanding, NeurIPS, 2022. [4] Solving Math Word Problems via Cooperative Reasoning induced Language Models, ACL, 2023. [5] Language Models Get a Gender Makeover: Mitigating Gender Bias with Few-Shot Data Interventions, ACL, 2023. > **W2:** In section 2 related work, a lot of related papers about KB + language model besides [1]-[4] are not included. We thank the reviewer for highlighting the related work on KB+LM. We will add the discussion such as regarding [1]-[4] in our related work. 
However, we want to emphasize that, the goal of this work is to leverage the capability of modern LMs for general attributed graph representation learning, instead of (1) leveraging KB for enhancing LMs [1]-[5] or (2) leveraging LM for KB completion [6]-[9]. Therefore, we believe this work is novel and significantly different from the existing work on KB+LM, and we will further clarify this in our revision. **Reference** [1] KEPLER: A Unified Model for Knowledge Embedding and Pre-trained Language Representation, TACL, 2021. [2] Exploiting Structured Knowledge in Text via Graph-Guided Representation Learning, EMNLP, 2020. [3] Deep Bidirectional Language-Knowledge Graph Pretraining, NeurIPS, 2021. [4] Jaket: Joint pre-training of knowledge graph and language understanding, AAAI, 2022. [5] Knowledge Enhanced Contextual Word Representations, EMNLP-IJCNLP, 2019. [6] Fusing topology contexts and logical rules in language models for knowledge graph completion, Information Fusion, 2023. [7] Multi-task learning for knowledge graph completion with pre-trained language models, COLING, 2020. [8] From discrimination to generation: Knowledge graph completion with generative transformer, WWW, 2022. [9] Multilingual Knowledge Graph Completion from Pretrained Language Models with Knowledge Constraints, ACL, 2023. > **Q1:** It would be great to include more recent KB + language works as baseline to show the good performance of presented method. As we discussed in the response to W2, our focus in this work is to leverage LMs for general attributed graph representation learning, which cannot be adequately achieved by existing KB+LM works. Although some studies such as [1] can perform KB representation learning, they do not apply to general attributed graphs (KBs usually do not have attributes). Furthermore, as suggested by Reviewer vuBn, we have added one KB dataset (i.e., Freebase [2]) to enhance the experimental results, which shows the generalizability of our method to actual KBs. 
**Table 1: Different downstream task results (%) on FreeBase.**

| Task | Node Classification | | Link Prediction | |
| :---: | :---: | :---: | :---: | :---: |
| Metric | Macro-F1 | Micro-F1 | AUC | MRR |
| M2V | 25.74 | 50.25 | 80.68 | 88.97 |
| HIN2Vec | 15.56 | 43.67 | 80.04 | 90.90 |
| ConvE | 25.13 | 49.31 | 88.14 | 93.57 |
| ComplEx | 20.25 | 49.43 | 84.01 | 91.46 |
| RGCN | 15.37 | 45.86 | 82.75 | 91.52 |
| HAN | 14.25 | 39.30 | 80.73 | 91.61 |
| HGT | 19.97 | 47.99 | 81.94 | 89.65 |
| HeCo | 23.95 | 48.62 | 79.32 | 87.40 |
| SHGP | 13.83 | 39.07 | 78.37 | 85.52 |
| LM(DRoBERTa) | 51.76 | 69.51 | 79.22 | 91.21 |
| WalkLM | **55.01** | **71.36** | **92.11** | **96.54** |

**Reference** [1] KEPLER: A Unified Model for Knowledge Embedding and Pre-trained Language Representation, TACL, 2021. [2] Heterogeneous network representation learning: A unified framework with survey and benchmark, TKDE, 2020. --- Rebuttal Comment 1.1: Title: Thank you for the reply Comment: Thank you for the follow up. The weakness of missing KB+LM method discussion/comparison is not well addressed. The difference between applying WalkLM to KB (like Freebase) and attributed graph is not clear. I would like to keep the rating as borderline reject. --- Reply to Comment 1.1.1: Title: Thanks for the response and some clarifications Comment: We thank the reviewer again for the response. Although we have tried to make it clear in the rebuttal, here we want to emphasize again the two points raised by the reviewer above: - The goal of this work is to **innovatively apply LM for general attributed graph representation learning**. As we have discussed in the answer to W2 above (and we will incorporate discussions as such into the revision), all existing studies on KB+LM are about **using KB to enhance LM** and/or **using LM to improve KB completion/reasoning**, both of which are rather different from our work. 
- KB/Freebase can be regarded as one type of attributed graphs, with attributes as simple as the node names. Thus WalkLM is not really designed for KB/Freebase, but it can be easily applied to KB/Freebase and yield reasonable performance. We hope this can further assist the reviewer to properly understand our work.
Summary: This paper proposes WalkLM, an unsupervised graph representation learning method leveraging the power of the language model. WalkLM first samples a set of sequences of entities from attributed graphs by random walks and fine-tunes the language model on the textualized walks. The embeddings learned by these procedures are applied to node-level and edge-level downstream tasks. The authors demonstrate that WalkLM significantly outperforms baselines in node classification and link prediction tasks on two real-world datasets. Strengths: WalkLM is a neat idea that combines graph representation learning and language modeling. This paper shows how the classic random walk approach can enhance graph representations using the expressiveness of the language model. The authors conduct various experiments including ablation studies, hyperparameter analysis, qualitative analysis, and efficiency analysis. There is no particular reason to reject the paper. Weaknesses: One weakness is the number of datasets used for evaluation. Enough experiments have been done, but only two datasets are covered in this paper. This paper proposes a new data format and a corresponding training method beyond just a new model architecture. So, it is necessary to clarify whether there are plenty of scenarios in which this method can be easily used in the real world. What are the conditions of the tasks and datasets to which this method can be applied, and what datasets can be employed for WalkLM besides the two datasets? If possible, further experiments should be done on those datasets. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See the weaknesses section. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the overall positive evaluations and detailed suggestions. In the following, we focus on the main issues to provide our feedback: > **1:** The conditions of the tasks and datasets to apply, and the number of datasets used for evaluation. The target of this work is general attributed graph representation learning, and the proposed WalkLM can learn node representations that can be used for various downstream tasks (e.g., disease prediction and topic modeling) and datasets (e.g., PubMed and MIMIC-III). In response to the reviewer's suggestions on adding more datasets for experiments, we have extended our experiments to include a dataset of FreeBase (as used in [1]). The results further confirm the generalizability of our proposed method:

**Table 1: Different downstream task results (%) on FreeBase.**

| Task | Node Classification | | Link Prediction | |
| :---: | :---: | :---: | :---: | :---: |
| Metric | Macro-F1 | Micro-F1 | AUC | MRR |
| M2V | 25.74 | 50.25 | 80.68 | 88.97 |
| HIN2Vec | 15.56 | 43.67 | 80.04 | 90.90 |
| ConvE | 25.13 | 49.31 | 88.14 | 93.57 |
| ComplEx | 20.25 | 49.43 | 84.01 | 91.46 |
| RGCN | 15.37 | 45.86 | 82.75 | 91.52 |
| HAN | 14.25 | 39.30 | 80.73 | 91.61 |
| HGT | 19.97 | 47.99 | 81.94 | 89.65 |
| HeCo | 23.95 | 48.62 | 79.32 | 87.40 |
| SHGP | 13.83 | 39.07 | 78.37 | 85.52 |
| LM(DRoBERTa) | 51.76 | 69.51 | 79.22 | 91.21 |
| WalkLM | **55.01** | **71.36** | **92.11** | **96.54** |

**References** [1] Heterogeneous network representation learning: A unified framework with survey and benchmark, TKDE, 2020. > **2:** The new data format and the real-world applicability. Our method does not introduce a new data format, but rather introduces the novel process of *textualization*, which converts general attributed graphs into text-like sequence data. 
This process allows us to leverage the capabilities of pre-trained language models for graph representation learning. Importantly, our method only requires some meaningful attributes on the graphs, which are available in most real-world graphs such as biological networks, social networks, and knowledge graphs. --- Rebuttal Comment 1.1: Title: Response to the rebuttal Comment: I acknowledge the author response. Thank you for addressing my questions. No further question is needed. --- Reply to Comment 1.1.1: Title: Thanks for the response Comment: We thank the reviewer for the response and will make sure to include the new results and discussions from our rebuttal in the final version.
null
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Bypass Exponential Time Preprocessing: Fast Neural Network Training via Weight-Data Correlation Preprocessing
Accept (poster)
Summary: The paper presents a new preprocessing method for training shallow overparametrized sparse neural networks. It significantly improves the preprocessing time yet achieves the same query-time performance. They also show that their algorithm is very close to optimal. Strengths: 1. Clearly written. Easy to understand. Well structured. 2. The story is rather complete. A lower bound is included showing that their result is close to optimal. 3. Connects ideas from applied algorithms (LSH, etc.) to deep learning; insightful. Weaknesses: 1. Not all assumptions appear in the statement of Theorem 1.1. I expect a more detailed description of the sparsity assumption. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. Does your work have applications in domains other than deep learning? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: 1. The setting is limited. Overparametrization is currently largely a theoretical assumption for the convenience of proving things, rarely used in practice. Shallow networks are also too limited for characterizing deep learning. Also, neural networks are commonly run on parallel machines like GPUs, with some complications for sparsity processing, so it's hard to say whether data structures would be really helpful. But it's very common, due to the nature of things, to have these limitations in a theoretical work, so it's not a serious flaw. Maybe it's better to design the algorithm for a general problem and position deep learning as a possible use case. They have addressed the limitation of their work in terms of shallowness. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks so much for your great efforts and helpful comments. Please refer to the general response for the practicality concerns. Regarding the sparsity assumption, we note that we do not assume the activated neurons are sparse. But we use a result proved in [SYZ21] that as long as setting a unified threshold ($b=\sqrt{0.4\log m}$) to the ReLU functions, after the standard randomized initialization, the number of activated neurons in each training iteration will be $O(m^{4/5})$ with high probability (see Remark 2.5). We will state it more explicitly in Theorem 1.1 in the final version. Regarding other potential applications, our data structure actually solves the problem of dynamically finding pairs of points with large inner products, which is a very useful sub-routine for applications such as clustering, database searching, etc. And compared to the classical computational geometric approach (e.g. [AEM92]), our data structure does not suffer from the curse-of-dimensionality problem, which is a big advantage for dealing with high-dimensional data. --- Rebuttal Comment 1.1: Comment: Makes sense.
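The sparsity claim in this rebuttal (roughly $O(m^{4/5})$ activated neurons under the unified threshold $b=\sqrt{0.4\log m}$ from [SYZ21]) can be sanity-checked numerically. The following sketch is purely illustrative and not code from the paper; it assumes a unit-norm input, so each preactivation is a standard normal and can be sampled directly:

```python
import math
import random

# Illustrative sketch (not the paper's code): with the shifted-ReLU
# threshold b = sqrt(0.4 * log m), only a small fraction of neurons fire
# after random initialization. For a unit-norm input x and weights
# w_r ~ N(0, I_d), each preactivation <w_r, x> is a standard normal,
# so we sample it directly instead of drawing full weight vectors.
random.seed(0)
m = 100_000                       # number of hidden neurons
b = math.sqrt(0.4 * math.log(m))  # unified ReLU threshold
active = sum(random.gauss(0.0, 1.0) > b for _ in range(m))
frac = active / m
# The Gaussian tail bound gives P(<w_r, x> > b) <= exp(-b^2 / 2) = m^(-1/5),
# i.e. O(m^(4/5)) activated neurons, matching the Remark 2.5 quoted above.
print(frac, m ** -0.2)
```

Empirically the activated fraction comes out well below $m^{-1/5}$, consistent with the high-probability bound cited in the rebuttal.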
Summary: This paper analyzes a specific neural network setting: two-layer neural networks with $m$ neurons in the hidden layer and a ReLU activation. Given input data of dimension $d$ and $n$ training examples, it normally requires O(mnd) operations to compute the hidden activations. This paper follows prior work in showing that this can be asymptotically reduced to $O(m^{4/5}n^2d)$, which is better in the overparameterized (large m) regime. Improving on the previous work, the proposed algorithm requires polynomial instead of exponential time preprocessing. The core technical contribution is the Correlation Tree data structure, which is a collection of binary trees that store inner products between datapoints and neurons, and allows efficient updating based on the sparsity of activated neurons. Strengths: - The paper analyzes an interesting theoretical setting, and provides an original data structure and algorithm for this problem of efficient computation. The main result, that training is possible with sublinear cost per iteration, is novel. - The paper is well-structured and written well. The problem setting is clear and the contributions are clearly described. Overall it is high quality. Weaknesses: The main weakness of this submission is the empirical practicality of the method. The authors are transparent about these weaknesses, and it is not the intended focus of this direction of research, so I think this weakness does not significantly detract from the paper. It might be interesting to comment more on the technical challenges behind extending it to more complex settings (e.g. beyond 2 layers, or other activation functions). Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: N/A Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent Presentation: 3 good Contribution: 2 fair Limitations: The authors discuss most of the prominent limitations, but I think it is not completely clear how practical the algorithm is. It seems like it should be relatively straightforward to implement, which could strengthen the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks so much for your helpful suggestions! Please refer to our general response for your concern about the practicality of the algorithm. Regarding the generalization to more complex settings, we think it is possible that our data structure will work for other activation functions. Roughly speaking, as long as the activated neurons are sparse or approximately sparse, our data structures will be able to theoretically reduce the cost-per-iteration. However, we need to re-prove the sparsification results in [SYZ21] for the activation function other than ReLU. We believe this is a very interesting direction for further research. For multi-layer neural network training, a natural barrier is $O(m^2nd)$-per iteration (assuming each layer has $m$ neurons and the number of layers is constant). [SZZ21] proposed an algorithm that runs in sub-quadratic time with a provable convergence guarantee. Their algorithm does not use the half-space reporting or tree-based data structure like ours or [SYZ21] but employs some other techniques like tensor sketching to speed up matrix-related computations, which are bottlenecks only appearing in the multi-layer setting. Since they showed that the ReLU sparsifier will also work even beyond 2-layer networks, our tree-based data structure could be applied to this setting. However, our lower bound for Dynamic Detection of Firing Neurons (Theorem 1.4) with $n=m$ may prevent us from surpassing the quadratic barrier. We will add some discussions about this point in the final version. Reference: [SZZ21] Song, Zhao, Lichen Zhang, and Ruizhe Zhang. "Training multi-layer over-parametrized neural network in subquadratic time." arXiv preprint arXiv:2112.07628 (2021). --- Rebuttal Comment 1.1: Comment: Thanks for the response. As mentioned in the original review, I think that the practicality concerns about the algorithm are not major given the theoretical focus of the work on asymptotically faster algorithms in certain regimes. 
I think adding some brief discussion of the extended directions discussed in the rebuttal would help the paper. Overall, I think the originality and novelty of the submission is good, and the significance is low for the practical ML community but moderate-to-high for the theoretical ML community. Overall I am keeping my recommendation of acceptance.
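The Correlation Tree summarized in the review above (binary trees caching inner products between datapoints and neurons, with updates exploiting activation sparsity) can be illustrated with a toy max-tree. This is a hedged reconstruction of the general idea only, not the paper's exact data structure; the names (`ToyCorrelationTree`, `update`, `above`) are invented here for illustration:

```python
from math import inf

class ToyCorrelationTree:
    """Toy max-tree over one datapoint's preactivations <w_r, x>.

    A simplified reconstruction of the idea described in the review,
    not the paper's exact data structure: internal nodes cache the max
    of their children, so neurons whose preactivation exceeds the ReLU
    threshold b can be reported without scanning all m neurons.
    """

    def __init__(self, vals):
        self.n = len(vals)
        self.size = 1
        while self.size < self.n:
            self.size *= 2
        self.tree = [-inf] * (2 * self.size)
        for i, v in enumerate(vals):
            self.tree[self.size + i] = v
        for i in range(self.size - 1, 0, -1):
            self.tree[i] = max(self.tree[2 * i], self.tree[2 * i + 1])

    def update(self, i, v):
        # After a gradient step changes neuron i's preactivation,
        # refresh one leaf and its ancestors: O(log m).
        i += self.size
        self.tree[i] = v
        i //= 2
        while i:
            self.tree[i] = max(self.tree[2 * i], self.tree[2 * i + 1])
            i //= 2

    def above(self, b):
        # Report all neuron indices with preactivation > b. Any subtree
        # whose cached max is <= b is pruned, so the cost scales with the
        # number k of activated neurons, roughly O(k log m).
        out, stack = [], [1]
        while stack:
            node = stack.pop()
            if self.tree[node] <= b:
                continue
            if node >= self.size:
                if node - self.size < self.n:
                    out.append(node - self.size)
            else:
                stack.extend((2 * node, 2 * node + 1))
        return sorted(out)
```

For example, `ToyCorrelationTree([0.1, 2.0, -1.0, 3.5, 0.9]).above(0.5)` reports indices `[1, 3, 4]`, and after `update(1, 0.2)` only `[3, 4]` remain, without ever touching the pruned subtrees.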
Summary: In the paper, the authors proposed a fast optimization algorithm for over-parameterized two-layer networks. They proved that, by using the sparse firing feature of the neural network, the proposed method requires only O(nmd) time in preprocessing and still achieves o(nmd) time per iteration. Strengths: 1. A thorough theoretical analysis is provided to prove the statement of the paper. Weaknesses: 1. The proposed method requires O(nm) space for storage, which is a lot when n and m are big. 2. No experimental results are shown in the paper. A toy experiment could show the empirical impact of this work. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. The space required by the proposed method is high. 2. Is there any possibility to do a toy experiment? Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: 1. The space required by the proposed method is high. 2. There are few works that train a two-layer network and show competitive performance with deep NNs. It is non-trivial to simplify the problem and prove the convergence of NNs. However, why do we need to do training based on a structure that is not used empirically? I hope at least a toy experiment can be provided to show that this training method is viable for deep NNs. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks so much for your great questions. Please refer to our general response for your concern about the experiments. And we agree with the referee that our data structure requires $O(mn)$ space. However, we would like to highlight that the primary objective of our research is to study the neural network training problem from a theoretical perspective. And in this work, we focus on optimizing the time complexity while preserving the convergence guarantee. As a trade-off, we need to use more memory. We believe it is a very interesting open question to develop a training algorithm that theoretically reduces space complexity. It is also interesting to investigate the time-space trade-off for neural network training algorithms. We leave these as open questions for future study. --- Rebuttal Comment 1.1: Comment: I have read reviews and rebuttals. Will update to weakly accept.
Summary: This paper investigates more efficient training methods than the usual training protocol, which requires $O(nmd)$ complexity for 2-layer ReLU networks. The authors improve the complexity of the previous study [SYZ21] by proposing a preprocessing method utilizing tree data structures for both data and weights. Moreover, the authors successfully provide the upper bound/lower bound (for the lower bound, the authors assume some conjecture) for their proposed preprocessing complexity. Unlike the previous study [SYZ21], this paper is the first to present a lower bound. Strengths: 1. The authors efficiently improve the preprocessing time from exponential complexity $O(2^d)$ (in the previous research [SYZ21]) to polynomial $O(nmd)$ (in this paper) for both data and weight parameters using tree data structures, which are popular in general computer science. 2. For the tree data structure preprocessing, both the upper bound and lower bound are provided with a solid theory under the NTK regime, and the theory basically depends on the previous study [SYZ21]. 3. Unlike the previous study [SYZ21], the authors also provide a lower bound for their proposed method (although assuming some conjecture), so the tree-based preprocessing method is nearly optimal. Weaknesses: Actually, I'm not an expert in this field, but I have carefully read this paper along with the previous study [SYZ21]. Here are my main concerns: 1. Based on my understanding, the authors have successfully reduced the complexity of preprocessing from $O(2^d)$ to $O(nmd)$ and from $O(n^d)$ to $O(nmd)$. However, it seems that the per-iteration time in this paper (for example, in Theorem 4.1, the time per iteration is $O(m^{4/5}n^2d)$, but it is $O(m^{4/5} n d)$ in the previous study [SYZ21]) has actually increased compared to the previous study [SYZ21]. In fact, if preprocessing time constitutes a significant portion of the neural network's training process, this research would have more significance.
Therefore, it seems that an (empirical) analysis of the portion that preprocessing takes in neural network training, in computational terms, is necessary. 2. As this study presents more advanced preprocessing techniques compared to the previous research [SYZ21], there should be an experimental analysis of how much the actual preprocessing time is reduced and its impact on per-iteration time (under quite theoretical settings or even for synthetic data/architectures). This analysis seems to be necessary, even for simple neural network models and simple datasets, to further validate the improvements made by the proposed preprocessing methods. 3. From the theoretical perspective, it is necessary to provide remarks on what the challenging points of the theory in this paper are compared to the previous study [SYZ21]. When I was examining the supplementary material, it was not clear which aspects have significantly changed compared to the theoretical analysis in the previous study [SYZ21]. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Please see the weaknesses part. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The authors have adequately addressed the limitations and potential negative societal impact of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks so much for taking the time to read and understand our paper and for your helpful suggestions. Please refer to our general response for your concerns about the empirical analysis and the comparison to [SYZ21]. In the final version, we will add a remark to compare our techniques to [SYZ21]. --- Rebuttal Comment 1.1: Title: Response to the authors Comment: I appreciate the authors for their response. I agree that the contribution of this paper focuses on theoretical results, but I still think that empirical studies, even on a toy architecture, are needed to demonstrate the efficiency compared to regular network training. In this sense, the authors could not resolve my concern about the empirical study, hence I decided to keep my original score.
Rebuttal 1: Rebuttal: ## General response: We thank all the referees for the valuable comments. Here, we give general responses to some common questions. First, regarding the concerns about empirical practicality, we want to kindly emphasize that our purpose is to present a training algorithm that is both efficient and accurate, supported by solid theoretical guarantees. For comparison, numerous recent works focused on empirically accelerating the training procedure (e.g., MONGOOSE [CLP+20], etc.). They used some locality-sensitive hashing (LSH)-based data structure to *approximately* find the neurons with large outputs. The impressive experiments in those works show that the neural network training time can be significantly reduced while achieving similar accuracies. However, it remains a big challenge to offer any performance guarantee for those methods. Our work and [SYZ21] seek to bridge the gap and develop more theoretical insights into fast neural network training. The algorithm proposed in [SYZ21] provides a provable convergence guarantee but falls short in terms of efficiency due to the slow preprocessing. In contrast, our algorithm not only provides a theoretical performance guarantee but also mitigates the curse-of-dimensionality issue in [SYZ21]. Thus, the contribution of this paper is more theoretical, and we acknowledge that our algorithm will not be as fast as the previous empirical methods. Furthermore, we note that it is quite difficult to empirically gauge the preprocessing time in [SYZ21] since the half-space reporting (HSR) data structure in [SYZ21] is an extremely complicated computational geometry data structure developed by [AEM92]. Even if we can implement this data structure, the exponential dependence on $d$ restricts its usage to very tiny datasets. Second, we would like to draw a comparison between our techniques and those developed in [SYZ21].
Our work consists of three technical components: a) designing the correlation-tree data structure to retrieve the activated neurons, b) proving the complexity and correctness of our algorithms, and c) showing the near-optimality of our data structure. The most important contribution of this work is in Part a), which exponentially improves the running time of the data structure in [SYZ21] in terms of $d$. Notably, unlike [SYZ21], we do not use any of the “black magic” from previous work, and our data structures are simple and clean (see Algorithms 1 and 2 in our paper). As we discussed in Section 5, they can be easily parallelized to reduce the cost-per-iteration to $O(m^{4/5}nd)$. Part b) relies on the ReLU-sparsifier developed in [SYZ21], which demonstrated that by adding a unified threshold to the ReLU functions, the activated neurons in each iteration will be sparse while the training dynamics will still converge to zero training loss. This insight helps us establish the performance guarantees for our training algorithm. Part c) is another novel contribution of this work. We are the first to develop a *non-trivial* fine-grained complexity result for a key sub-problem in neural network training, namely the dynamic detection of activated neurons. This result shows that even for a nearly-constant dimension, it is still hard to construct a data structure with sublinear-time per update and subquadratic-time per query, implying that our data structure is nearly optimal in the computational-theoretic sense. Reference: [AEM92] Agarwal, Pankaj K., David Eppstein, and Jirí Matousek. "Dynamic half-space reporting, geometric optimization, and minimum spanning trees." Annual Symposium on Foundations of Computer Science. Vol. 33. IEEE Computer Society Press, 1992. [CLP+20] Chen, Beidi, et al. "Mongoose: A learnable lsh framework for efficient neural network training." International Conference on Learning Representations. 2020. [SYZ21] Song, Zhao, Shuo Yang, and Ruizhe Zhang.
"Does preprocessing help training over-parameterized neural networks?." Advances in Neural Information Processing Systems 34 (2021): 22890-22904.
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
SE(3) Diffusion Model-based Point Cloud Registration for Robust 6D Object Pose Estimation
Accept (poster)
Summary: The paper tackles model-based 6D object pose estimation using SE(3) diffusion model-based point cloud registration. Point cloud registration is trained with diffusion and denoising processes on SE(3), which gradually perturb the optimal pose and learn a denoising network to refine the noisy transformation progressively. The SE(3) optimization objective is derived from a 3D registration-specific variational lower bound, and the denoising network is trained with a surrogate registration model. The perturbation and interpolation on SE(3) are performed with the Lie algebra. Experiments on TUD-L, LINEMOD, and Occluded-LINEMOD datasets show the validity of the method. Strengths: 1) The paper reformulates 3D point cloud registration as a diffusion and denoising framework, and the results validate the effectiveness of the framework. 2) The paper is fairly well-written and easy to follow. Weaknesses: 1) The title of the paper says the method is "robust". However, it seems there is a dilemma between sample diversity and sample efficiency for the proposed method. That means a fixed set of hyper-parameters might not work well for different kinds of datasets. In other words, the method is not as robust as claimed in the title. 2) Only mAP is used for evaluating 6D pose estimation; other popular evaluation metrics such as ADD and the BOP metrics should be considered for a more comprehensive understanding of the method's performance. 3) [a] also learns an SE(3) diffusion model. Maybe it should be discussed as related work. [a] SE(3)-DiffusionFields: Learning smooth cost functions for joint grasp and motion optimization through diffusion. ICRA 2023. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1) Could the authors provide the training time cost comparison of the proposed method and the compared methods? Does it require significantly more time for training compared to existing methods?
2) How about performing the interpolation and perturbation of the transformation on so(3) + R^3 (which is also a linear vector space) instead of se(3)? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1**: The title of the paper says the method is "robust". However, it seems there is a dilemma between sample diversity and sample efficiency for the proposed method. That means a fixed set of hyper-parameters might not work well for different kinds of datasets. In other words, the method is not as robust as claimed in the title. **A1**: Thank you for your insightful comments and constructive feedback. The “robust” claimed in the title mainly refers to the fact that our SE(3) diffusion registration models present excellent estimation robustness on real-world pose estimation challenges (e.g., full-range transformation and severe occlusions). Additionally, Table 2 shows that a fixed set of hyper-parameter settings can consistently bring significant performance gains across different datasets, which strongly supports the algorithmic robustness of our method. **Q2**: Only mAP is used for evaluating 6D pose estimation; other popular evaluation metrics such as ADD and the BOP metrics should be considered for a more comprehensive understanding. **A2**: Thanks for your suggestion. We will add more evaluation metrics (such as ADD and the BOP metrics) in our revised version. In this rebuttal response, we give the results of the methods DCP, Diff-DCP, RPMNet and Diff-RPMNet over ADD and four BOP metrics (MSSD, AD, ADI, PROJ) on the LINEMOD benchmark below. It can be observed that our SE(3) diffusion registration methods (Diff-DCP and Diff-RPMNet) consistently outperform their baseline methods (DCP and RPMNet) by a large margin on all metrics. We will add these comparison results in our revised version.

| | ADD | MSSD | AD | ADI | PROJ |
|-----|---|---|---|---|---|
| DCP | 0.226 | 0.378 | 0.430 | 0.893 | 0.021 |
| Diff-DCP | 0.640 | 0.739 | 0.801 | 0.967 | 0.340 |
| RPMNet | 0.341 | 0.417 | 0.521 | 0.897 | 0.071 |
| Diff-RPMNet | 0.609 | 0.710 | 0.779 | 0.959 | 0.267 |

**Q3**: [a] also learns an SE(3) diffusion model. Maybe it should be discussed as related work.
**A3**: Thanks for pointing it out. [a] proposes an SE(3) diffusion model for robotic grasping tasks and presents promising results. Since it is a concurrent paper with ours, we did not notice or discuss it in our original paper. We will discuss it as related work in our revised version. **Q4**: Training time cost comparison of the proposed method and the compared methods? Does it require significantly more time for training? **A4**: We hope to clarify that, to ensure a fair comparison, we take the same number of training epochs as the compared methods for performance evaluations. Particularly, during each epoch, given a training sample {X, M, H_0}, the compared methods directly use it for training, while our method generates one diffusion sample {X_t, M, H_t} for training (please refer to Alg. 1). Thus, the numbers of training samples utilized in each epoch are also identical. This indicates that our diffusion model shares a comparable training time with other methods and does not require significantly more time for training. **Q5**: How about performing the interpolation and perturbation of the transformation on so(3) + R^3 instead of se(3)? **A5**: As shown in the table below, compared to se(3) diffusion, so(3)+R^3 diffusion presents some performance degradation on the TUD-L dataset. Their performance differences primarily stem from the disparity in quality of the training samples generated by se(3) diffusion and so(3)+R^3 diffusion. Specifically, se(3) diffusion is demonstrated to generate a smooth and shortest interpolation path (i.e., the geodesic path) between two transformations, which can provide high-quality training samples to train the denoising network for a more reliable se(3) reverse process (please refer to Alg. 1). Instead, the so(3)+R^3 diffusion trace would be non-smooth and unstable, which would generate large amounts of low-quality training samples and thereby degrade the pose-correction ability of the trained denoising network.
| | 5$^{\circ}$@mAP | 10$^{\circ}$@mAP | 1cm@mAP | 2cm@mAP |
|-----|---|---|---|---|
| Diff-RPMNet w/ so(3)+R^3 | 0.63 | 0.93 | 0.95 | 0.98 |
| Diff-RPMNet w/ se(3) | **0.90** | **0.98** | **0.98** | **0.99** |

--- Rebuttal Comment 1.1: Comment: Thanks for the clarifications.
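The se(3)-geodesic interpolation this rebuttal compares against so(3)+R^3 interpolation, $H_t = \mathrm{Exp}(t \cdot \mathrm{Log}(H_1 H_0^{-1}))\,H_0$, can be sketched with standard textbook SE(3) exponential/logarithm formulas. This is a generic illustration, not the authors' code; the function names and the small-angle tolerance are our own choices, and the logarithm assumes a rotation angle strictly below π:

```python
import numpy as np

def hat(w):
    # Map a 3-vector to its 3x3 skew-symmetric matrix.
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def se3_exp(xi):
    # xi = (rho, omega) in R^6 -> 4x4 homogeneous transform
    # (Rodrigues formula for the rotation, left-Jacobian V for translation).
    rho, w = xi[:3], xi[3:]
    th = np.linalg.norm(w)
    W = hat(w)
    if th < 1e-8:  # first-order fallback near the identity
        R = np.eye(3) + W
        V = np.eye(3) + 0.5 * W
    else:
        R = np.eye(3) + np.sin(th) / th * W + (1 - np.cos(th)) / th**2 * W @ W
        V = (np.eye(3) + (1 - np.cos(th)) / th**2 * W
             + (th - np.sin(th)) / th**3 * W @ W)
    H = np.eye(4)
    H[:3, :3] = R
    H[:3, 3] = V @ rho
    return H

def se3_log(H):
    # Inverse of se3_exp; valid for rotation angles strictly below pi.
    R, t = H[:3, :3], H[:3, 3]
    c = np.clip((np.trace(R) - 1) / 2, -1.0, 1.0)
    th = np.arccos(c)
    if th < 1e-8:
        w = np.zeros(3)
        Vinv = np.eye(3)
    else:
        W = th / (2 * np.sin(th)) * (R - R.T)  # = hat(w) with |w| = th
        w = np.array([W[2, 1], W[0, 2], W[1, 0]])
        Vinv = (np.eye(3) - 0.5 * W
                + (1 / th**2) * (1 - th * np.sin(th) / (2 * (1 - np.cos(th)))) * W @ W)
    return np.concatenate([Vinv @ t, w])

def se3_interp(H0, H1, t):
    # Geodesic interpolation: H_t = Exp(t * Log(H1 H0^{-1})) H0.
    return se3_exp(t * se3_log(H1 @ np.linalg.inv(H0))) @ H0
```

At t = 0 and t = 1 this recovers H_0 and H_1 exactly, and intermediate poses follow the SE(3) geodesic. Decoupled so(3)+R^3 interpolation (e.g. rotation slerp plus linear translation) generally traces a different path, because in the se(3) parameterization the translation is coupled to the rotation through the V matrix.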
Summary: The authors propose an approach to address the problem of 6D pose estimation on real-world data based on a denoising diffusion process. They introduce a novel diffusion process on the SE(3) manifold, leveraging the Lie algebra se(3) to shift the process from the linear Euclidean to the nonlinear SE(3) space. Furthermore, a registration-specific variational lower bound is derived as the optimization objective for model-based learning. By reformulating the transformation following the posterior of the denoising matching term, the prior mean can be adapted to incorporate a surrogate registration model, enabling the application of existing deep registration models. The experimental results show that not all registration models can deal with real-world data and emphasize the significant effectiveness of the novel SE(3) diffusion process. Strengths: Originality: A novel diffusion process on the nonlinear SE(3) space for point cloud registration is introduced. Quality: The paper is well written in terms of language and organization, and a solid mathematical foundation for the proposed diffusion process is provided. The extensive experiments highlight the superior performance of the approach compared to the respective baselines and a variety of other related work. The ablation studies demonstrate the impact of the approach and the influence of different configurations on multiple datasets. Clarity: Despite the emphasis on the mathematical background, all mathematical derivations are explained concisely and comprehensibly. The framework-overview figure and the provided pseudocode for the training and inference algorithms allow the reader to grasp the underlying concepts easily. Significance: The novel denoising diffusion process significantly outperforms the baselines on real-world data, showing the effectiveness of the approach compared to related work. Weaknesses: I could not find any major weaknesses.
The authors: - Explain the method well - Conduct extensive experiments and compare their approach to multiple related works on different datasets (only real-world datasets, but this is intended) - Provide quantitative and qualitative results and analyze which parts of their approach contribute to the improvements - Include ablation studies of the diffusion process (forward & backward) to further demonstrate the impact of their approach - The Appendix contains the mathematical derivations in detail, additional visualizations, and a section on broader impact, limitations and future work Technical Quality: 3 good Clarity: 3 good Questions for Authors: Where do the evaluation results for the related methods in Table 1 come from? Did you evaluate the methods yourself? If so, which code base did you use, or where did you find these results? Most methods do not provide them in their papers. Why are only the qualitative results for RPMNet listed in the Appendix and not also for DCP? Since DCP was used in the ablation studies, it would also be interesting to see some visual results of it. Appendix A.1 $\rightarrow$ equation 5 $\rightarrow$ penultimate term: distribution $p$ is used in the numerator, but should it not be distribution $q$? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The paper addresses the broader implications in terms of societal and academic impact. Moreover, it discusses the limitations of the work and indicates possible future directions based on these limitations.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1**: Where do the evaluation results for the related methods in Table 1 come from? Did you evaluate the methods yourself? If so, which code base did you use, or where did you find these results? Most methods do not provide them in their papers. **A1**: Thanks for your positive score and encouraging comments. Due to the unavailability of source code for MN-DCP and MN-IDAM, we reference their reported experimental outcomes from the original papers for performance evaluation. The remaining results such as FMR, RGM, RIENet, DCP, and RPMNet are evaluated by ourselves. To ensure a fair comparison, we employ their official codes for implementation and meticulously fine-tune them to enhance their performance on real-world 6D object pose estimation datasets. **Q2**: Why are only the qualitative results for RPMNet listed in the Appendix and not also for DCP? Since DCP was used in the ablation studies, it would also be interesting to see some visual results of it. **A2**: Thanks for your suggestion. We have presented some qualitative comparisons of DCP and Diff-DCP on TUD-L, LINEMOD and Occluded-LINEMOD datasets in Fig.1 of the uploaded rebuttal material (Please refer to the attached PDF file). In the future, following your suggestion, we will add these visualization results into our qualitative evaluations in our revised version. **Q3**: In equation 5 of Appendix A.1, distribution p is used in the numerator, but should it not be distribution q? **A3**: Thanks for pointing this typo out. We will correct it by replacing "p" with "q" in our revised paper. --- Rebuttal Comment 1.1: Comment: Thanks for those clarifications.
Summary: This paper introduces a point cloud alignment method based on a diffusion model on SE(3). For this, the forward and reverse diffusion processes are performed in the lie group se(3). The method is evaluated on challenging real datasets, showing significant improvements over its baselines. Strengths: Relevant Application and Challenging Method: The paper addresses the highly relevant topic of 6DoF pose estimation using diffusion models, which is an area of significant interest. Applying diffusion models to complex parameter spaces like SE(3) poses a challenging problem that has yet to be fully resolved. Experimental setup: The method is evaluated on challenging real datasets that are often used in the literature regarding 6DoF pose estimation and refinement. The comparison with a significant number of existing methods is fair and comprehensive. Multiple scenarios and thresholds are evaluated, allowing for a better insight into the strengths and weaknesses of the different methods. Results: The results validate the method, showing significant improvements over the baseline methods. Several ablation studies allow for insights into the method's parameterizations. Method: The method can be used with different backbones, increasing its impact as an add-on method that potentially strengthens any existing baseline. Writing: The paper is well written and guides the reader through the traditionally notation-heavy math of the diffusion model. The ideas are well motivated and explained. **Rebuttal** The reviewer appreciates the clarifications and additional experiments provided in the rebuttal and maintains the initial rating. Weaknesses: Some related work on diffusion models over SE(3) / SO(3) could be cited. Examples include: [R2] Leach et al., Denoising diffusion probabilistic models on so(3) for rotational alignment, ICLRW 2022. 
[R3] Urain et al., SE(3)-DiffusionFields: Learning smooth cost functions for joint grasp and motion optimization through diffusion, ICRA 2022. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Q1: 179: There are several different ways for interpolating rotations; I've often seen quaternions used for this. Is there a motivation for using the exponential map over other representations? Similarly for the perturbations. Q2: 268: How are 512 points sampled if the objects are smaller than that (especially in real scenes)? (Suggestion for future work, no effect on rating, no need to respond): The authors might want to look at SYMSOL [R1], a synthetic dataset designed especially for dealing with pose ambiguities. [R1] Murphy, Kieran A., et al. "Implicit-PDF: Non-Parametric Representation of Probability Distributions on the Rotation Manifold.", ICML 2021 Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: Limitations and impact are adequately discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1**: 179: There are several different ways for interpolating rotations; I've often seen quaternions used for this. Is there a motivation for using the exponential map over other representations? Similarly for the perturbations. **A1**: Thanks for your valuable and positive comments. (1) There are indeed several methods for interpolating 3DoF rotations, such as using quaternion or matrix representations. However, our task needs to employ more complex 6DoF transformation interpolation. In this context, the 6DoF exponential map for transformation interpolation has been demonstrated to generate the smooth, shortest interpolation path (i.e., the geodesic path) between two transformations. In contrast, decoupling 6DoF transformation interpolation into separate 3DoF rotation interpolation and 3DoF translation interpolation would suffer from non-smooth, unstable interpolation traces, resulting in low-quality training samples for denoising-network training (see Alg. 1). The experimental results on the TUD-L dataset in the table below also empirically validate that using the 6DoF exponential map for transformation interpolation achieves higher estimation precision than 3DoF quaternion + 3DoF translation interpolation. Therefore, we prefer using the exponential map over other representations for transformation interpolation. (2) For 6DoF transformation perturbations, both 6DoF exponential-map perturbation and 3DoF quaternion + 3DoF translation perturbation are valid options. In our paper, to maintain content coherence, we still employ the exponential map representation for transformation perturbation.

| | 5$^{\circ}$@mAP | 10$^{\circ}$@mAP | 1cm@mAP | 2cm@mAP |
|-----|---|---|---|---|
| Diff-RPMNet w/ 3DoF quaternion+3DoF translation | 0.63 | 0.93 | 0.95 | 0.98 |
| Diff-RPMNet w/ 6DoF exponential map | **0.90** | **0.98** | **0.98** | **0.99** |

**Q2**: 268: How are 512 points sampled if the objects are smaller than that (especially in real scenes)?
**A2**: If the number of scanned object points is less than 512, some points will be sampled repeatedly to meet the required point count. For example, if we want to sample a point set of size 5 from 3 points {p1, p2, p3}, the resulting sampled points would be {p1, p1, p2, p2, p3}, where points {p1, p2} are sampled repeatedly. **Q3**: Some related work on diffusion models over SE(3) / SO(3) could be cited. **A3**: Thanks for pointing them out. We will incorporate them into our related work section for a thorough discussion in the revised version. --- Rebuttal Comment 1.1: Comment: Thank you for the additional details and clarifications.
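The repeated-sampling scheme described in A2 (padding a small point set up to a fixed size by reusing points) can be sketched as below. This is a minimal illustration of the idea, not the authors' implementation, and `sample_fixed_size` is a hypothetical helper.

```python
import numpy as np

def sample_fixed_size(points, target=512):
    """Return exactly `target` points, repeating points when fewer are available."""
    n = len(points)
    if n >= target:
        idx = np.random.choice(n, target, replace=False)  # plain subsampling
    else:
        # cycle through the indices and sort, e.g. 3 points -> [0, 0, 1, 1, 2]
        idx = np.sort(np.resize(np.arange(n), target))
    return points[idx]

# Sampling 5 points from {p1, p2, p3} yields {p1, p1, p2, p2, p3}, as in A2.
pts = np.array([[1.0, 0.0, 0.0], [2.0, 0.0, 0.0], [3.0, 0.0, 0.0]])
out = sample_fixed_size(pts, target=5)
```

The padded set carries no new geometric information; it only satisfies the fixed input size the network expects.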
Summary: This paper proposes an SE(3) diffusion model-based point cloud registration framework for robust 6D object pose estimation, which formulates point cloud registration as a denoising diffusion process and enables progressive refinement of the transformation between the source point cloud and the model point cloud. Experiments are conducted on the TUD-L, LINEMOD, and Occluded-LINEMOD datasets, and state-of-the-art results are achieved on these datasets. Strengths: 1. The paper is overall well written and easy to follow. 2. Experiments are conducted on three different datasets and comparisons are made with different methods. High performance is achieved. Weaknesses: 1. Since this paper aims for 6D object pose estimation, the compared methods should obviously focus on 6D object pose estimation methods, not just some point cloud registration methods. 2. The proposed method is trained and tested on 6D object pose estimation datasets. However, the compared methods are point cloud registration methods. Did the authors retrain these learning-based point cloud registration models on the datasets in order to fairly compare with these learning-based methods? I am concerned that this is the key to the improved performance. 3. The comparisons are done against only a few methods in the current manuscript; I suggest more state-of-the-art methods be included for comparison, e.g., point cloud registration methods (Predator[1], Cofinet[2], Geotransformer[3], and so on) and 6D object pose estimation (Fs-net[4], Ove6d[5], Gpv-pose[6] and so on). [1] Shengyu Huang, Zan Gojcic, Mikhail Usvyatsov, Andreas Wieser, and Konrad Schindler. Predator: Registration of 3d point clouds with low overlap. In CVPR, 2021. [2] Hao Yu, Fu Li, Mahdi Saleh, Benjamin Busam, and Slobodan Ilic. Cofinet: Reliable coarse-to-fine correspondences for robust point cloud registration. NeurIPS, 2021. [3] Zheng Qin, Hao Yu, Changjian Wang, Yulan Guo, Yuxing Peng, and Kai Xu.
Geometric transformer for fast and robust point cloud registration. In CVPR, 2022. [4] Wei Chen, Xi Jia, Hyung Jin Chang, Jinming Duan, Shen Linlin, and Ales Leonardis. Fs-net: Fast shape-based network for category-level 6d object pose estimation with decoupled rotation mechanism. In CVPR, 2021. [5] Dingding Cai, Janne Heikkila, and Esa Rahtu. Ove6d: Object viewpoint encoding for depth-based 6d object pose estimation. In CVPR, 2022. [6] Yan Di, Ruida Zhang, Zhiqiang Lou, Fabian Manhardt, Xiangyang Ji, Nassir Navab, and Federico Tombari. Gpv-pose: Category-level object pose estimation via geometry-guided point-wise voting. In CVPR, 2022. Technical Quality: 1 poor Clarity: 2 fair Questions for Authors: See Weaknesses. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 1 poor Presentation: 2 fair Contribution: 2 fair Limitations: The authors mention the limitation of the proposed method in supplementary material. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1**: Compared methods should focus on 6D object pose estimation methods obviously, not just point cloud registration methods. **A1**: Thanks for your diligent comments to help improve our work. Considering that the RGB(D)-based pose estimation methods would suffer from limited robustness to challenging lighting conditions (e.g., low-light conditions or color-varying light conditions), our work thus focuses on how to utilize geometry-rich point-cloud data (without any RGB information) to achieve lighting-robust, instance-level pose estimation. Based on this task setting, point cloud registration methods are the suitable and popular 6D pose estimation methods, where the orientation and position of the object can be effectively recovered by registering the scanned object point cloud to the model one. However, the conventional RGB(D)-based methods are unable to adapt to this setting due to the requirement for additional RGB information. Therefore, to ensure a fair comparison, following [R1], we compare our method with those registration-based pose estimation methods rather than conventional RGB(D)-based methods. In the future, following your suggestion, we will add some RGB(D)-based 6D pose estimation methods (e.g., your listed papers below) for performance comparisons in our revised version. [R1]: Learning-based Point Cloud Registration for 6D Object Pose Estimation in the Real World. ECCV’2022 **Q2**: Did the authors retrain these learning-based point cloud registration models on the datasets in order to fairly compare with these learning-based methods? I am concerned that this is the key to the improved performance. **A2**: To ensure a fair comparison, all compared learning-based point cloud registration models are retrained on the training data of 6D object pose estimation datasets for performance comparisons. 
Particularly, we employ their official codes for implementation and meticulously fine-tune them to enhance their performance on real-world 6D object pose estimation datasets. It should be noted that, due to the unavailability of source code for MN-DCP and MN-IDAM, we reference their reported experimental outcomes from the original papers for performance comparisons. **Q3**: The comparisons are done to a few methods in the current manuscript; I suggest more state-of-the-art methods should be included for comparison. **A3**: Thanks for your suggestion. We will incorporate the methods you've mentioned for comparisons in our future version. Furthermore, we hope to clarify that the listed registration methods [1, 2, 3] primarily center around scene-level registration rather than object-level. Due to the huge scale difference between scene and object point clouds, adapting their network architectures and hyperparameters to effectively extend them to object-level registration and object pose estimation is a non-trivial task. Hence, our initial paper version does not engage in a performance comparison with them. In the future, we will make our best effort to fine-tune their network architectures and hyperparameters to better suit object-level registration for pose estimation. Additionally, we note that among these methods, Predator [1] offers a set of hyperparameters specifically for object-level registration, and we retrain it on the TUD-L dataset for comparisons. The table below shows that our method achieves a significant performance advantage over Predator with a lower time cost. Moreover, considering that the RGB(D)-based instance-level pose estimation methods still suffer from limited robustness to challenging lighting conditions (e.g., low-light conditions or color-varying light conditions), our paper thus focuses on improving the lighting robustness of instance-level methods via point cloud registration.
Hence, we mainly compare our method with the instance-level methods rather than the categorical methods you listed. In our future version, we will include your suggested methods in our experimental comparisons.

| | 5$^{\circ}$@mAP | 10$^{\circ}$@mAP | 1cm@mAP | 2cm@mAP | Time (sec.) |
|-----|---|---|---|---|---|
| Predator | 0.42 | 0.71 | 0.66 | 0.81 | 0.62 |
| Diff-DCP (ours) | 0.65 | 0.85 | 0.73 | 0.94 | **0.13** |
| Diff-RPMNet (ours) | **0.90** | **0.98** | **0.98** | **0.99** | 0.17 |

--- Rebuttal Comment 1.1: Comment: Thank you for the authors' feedback. This paper is dedicated to the task of point cloud-based 6D object pose estimation, and there are so many depth-only-based 6D object pose estimation methods, e.g., OVE6D, StablePose, CloudAAE, CloudPose, and so on. I think it is necessary to include these methods in the comparisons. Otherwise, I worry that the proposed method is not convincing enough. I think a thorough comparison with existing methods is necessary to verify the effectiveness of the proposed methods. Furthermore, the listed point cloud registration methods could also perform well on object-centric datasets. Besides, more state-of-the-art object-level point cloud registration methods have emerged recently, and comparison with the latest state-of-the-art methods is necessary. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for reviewing and replying to our rebuttal. * Regression-based vs Matching-based Methods: We note that most previous depth-only-based methods (e.g., CloudAAE, CloudPose, and StablePose) on 6D pose estimation are based on direct pose **regression**, disregarding the utilization of object models for pose reference. As a result, these methods face limitations in their ability to generalize to unseen objects, primarily due to the absence of model-informed pose references.
Although OVE6D exhibits the potential to generalize to unseen objects, it necessitates the additional encoding of new instances into the viewpoint codebook, which could potentially increase its application costs. By contrast, our method follows a drastically different research trajectory that leverages **3D matching** between the scanned object point cloud (PC) and the model PC for pose estimation. Therefore, its inherent model generalization ability in 3D matching potentially facilitates the pose estimation of unseen objects. To validate this, we choose objects with IDs=1~8 from LINEMOD to train our Diff-DCP and test its generalization performance on the unseen object with ID=9. Also, we directly generalize the Diff-DCP trained on LINEMOD to the TUD-L object with ID=3. We report the seen vs unseen results in the table below. We observe that our method Diff-DCP can still achieve meaningful and encouraging estimation precision on unseen objects without any additional model fine-tuning.

| | LINEMOD (ID=9) | TUD-L (ID=3) |
|:-----|:-----|:-----|
| | mAP (5&deg;/10&deg;/1cm/2cm) | mAP (5&deg;/10&deg;/1cm/2cm) |
| Diff-DCP (seen) | 0.12/0.46/0.80/0.96 | 0.92/0.98/0.89/0.97 |
| Diff-DCP (unseen) | 0.10/0.39/0.68/0.82 | 0.53/0.67/0.57/0.65 |

* Core Contribution to Matching-based Methods: As mentioned in lines 30-41, current object-level registration methods (e.g., MN-DCP in ECCV’2022) still suffer from limited matching robustness for pose estimation due to real-world challenges (e.g., full-range rotation and severe occlusion). This motivates us to develop an SE(3) diffusion registration model as a **plug-and-play** method to advance current registration methods for more reliable pose estimation. As a general framework, our method can integrate with different deep registration models to boost their performance (Ref: DCP vs Diff-DCP and RPMNet vs Diff-RPMNet in Table 1), thereby increasing our impact as an add-on method to enhance existing registration models.
Theoretically, we can also include Predator, GeoTransformer, and CoFiNet as surrogate registration models (see Eq. 12) in our framework to boost their performance. We are highly interested in validating this in our future paper version. Also, we think the significant performance improvements achieved by Diff-DCP and Diff-RPMNet over their baselines (see Table 1) have sufficiently verified the effectiveness of our framework. * Summary: Different from most previous **regression**-based, depth-only methods, our method is based on **3D matching**, enabling it to enjoy promising generalization ability to unseen object pose estimation. Furthermore, our SE(3) diffusion registration model is a general, **plug-and-play** framework. We can take different deep registration methods (such as the listed Predator and GeoTransformer) and improve their performance. Finally, we regret that, due to time constraints, we have to defer **a thorough comparison** with existing methods, e.g., depth-only-based methods and more point cloud registration methods, to our final paper version.
Finally, we sincerely hope the reviewer will reconsider our paper for the following aspects: (i) We pioneer the SE(3) diffusion registration model for robust 6D object pose estimation, with two innovative components: a transformation interpolation-based SE(3) diffusion process and a surrogate registration-driven SE(3) reverse process; (ii) In this context, we derive an effective registration-specific variational lower bound for model optimization (detailed proof can be found in Appendix A.1); (iii) Our method is a general, plug-and-play framework, which can effectively integrate with different deep registration models to boost their performance, significantly increasing our impact as an add-on method to enhance existing registration models; (iv) Extensive experiments on real-world 6D object pose estimation datasets validate that our diffusion variants (Diff-DCP and Diff-RPMNet) consistently achieve significant performance improvements over their baseline models (DCP and RPMNet).
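Contribution (i), the transformation interpolation-based SE(3) diffusion process, rests on geodesic interpolation between rigid transforms, H(t) = exp(t · log(H1 · H0^-1)) · H0. The sketch below illustrates this with generic matrix exponential/logarithm routines on 4x4 homogeneous matrices; it is an assumed, minimal rendition of the idea (`make_H` is a hypothetical helper), not the authors' implementation.

```python
import numpy as np
from scipy.linalg import expm, logm

def se3_interp(H0, H1, t):
    """Geodesic interpolation between 4x4 rigid transforms:
    H(t) = expm(t * logm(H1 @ inv(H0))) @ H0, so H(0) = H0 and H(1) = H1."""
    delta = logm(H1 @ np.linalg.inv(H0)).real  # element of the Lie algebra se(3)
    return expm(t * delta) @ H0

def make_H(angle_deg, tz):
    """Homogeneous transform: rotation about z plus translation along z (hypothetical helper)."""
    r = np.radians(angle_deg)
    H = np.eye(4)
    H[:2, :2] = [[np.cos(r), -np.sin(r)], [np.sin(r), np.cos(r)]]
    H[2, 3] = tz
    return H

H0, H1 = make_H(0.0, 0.0), make_H(60.0, 2.0)
H_mid = se3_interp(H0, H1, 0.5)  # halfway along the geodesic: a 30-degree rotation
```

Interpolating in se(3) this way keeps intermediate poses on the smooth screw-motion path between the endpoints, which is the property the rebuttal's A1 contrasts against separate rotation/translation interpolation.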
Rebuttal 1: Rebuttal: To address Q2 raised by Reviewer-NS5b, we have included some qualitative comparisons of DCP and Diff-DCP on TUD-L, LINEMOD, and Occluded-LINEMOD datasets in the attached PDF file. Pdf: /pdf/5d4797bfcf68cd7d63510a38236fbf0170ec1aee.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper proposes an SE(3) diffusion model for pose estimation, i.e., rotation R and translation t, for the use case of registering two point clouds. The paper addresses an important problem in 3D vision, which has applications in robotics and AR/VR. The authors claim that conventional diffusion models won't work for this task of SE(3) diffusion and propose several changes to constrain the transformation transitions during the diffusion and reverse processes. Results are shown against competing and challenging baselines. Strengths: The technical contributions look sound and the authors address an important and challenging problem in 3D vision. Adapting conventional diffusion models to SE(3) using a thorough mathematical formulation is convincing. Results show strong improvement on quantitative metrics. Weaknesses: 1. I wonder why the authors didn't compare or show experiments on more challenging categorical 6D pose and size estimation benchmarks, nor discuss those in related works [1][2][3]. Is it a limitation of the current work? This should at least be discussed in the literature review for the work to have more merit, since conventional instance-based pose estimation, which assumes known CAD models, has mostly been solved, achieving higher accuracies. Might this be one of the reasons the authors get smaller improvements in some of the metrics? 2. Relating to my point above, the authors only evaluate their approach on a single and fairly simple benchmark. I understand this is what the authors are proposing, i.e., point cloud registration, but adding more benchmarks (within the point cloud registration literature, such as in-the-wild pose estimation, et cetera) could further reinforce the authors' results, since it is hard to fully validate the efficacy of the results when evaluated on a single benchmark.
Recently, the shift has been more toward in-the-wild pose estimation [4] and zero-shot pose estimation, and I wonder if this work can directly be useful for some of the other, more complicated problems as well. 3. One of the pain points of traditional ICP is parameter tuning, which can improve results although it takes a lot of hand-tuning. Does the authors' approach suffer from the same problem, e.g., the number of denoising steps, etc.? 4. Diffusion models are slow to train and infer. Can the authors comment on their inference speed vs. accuracy, i.e., if they use a smaller denoising timestep? In essence, I saw no comparisons to state-of-the-art timing results for 6D pose estimation, i.e., [2][3], or a direct discussion of it. Timing is crucial for robotics applications, and the work's strong improvement might be downplayed if this is very slow and not real-time. [1] Normalized Object Coordinate Space for Category-Level 6D Object Pose and Size Estimation, Wang et al. [2] CenterSnap: Single-Shot Multi-Object 3D Shape Reconstruction and Categorical 6D Pose and Size Estimation, Irshad et al. [3] ShAPO: Implicit Representations for Multi Object Shape Appearance and Pose Optimization, Irshad et al. [4] Category-Level 6D Object Pose Estimation in the Wild: A Semi-Supervised Learning Approach and A New Dataset, Fu et al. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please see the remarks in the weakness section. I am willing to improve my score if the authors address my concerns about: 1. more complicated benchmarks, i.e., comparisons to categorical 6D pose and size estimation or in-the-wild pose estimation; 2. discussion/comments about parameter tuning for the authors' approach; 3. discussion/comparison of timing, i.e., the speed vs. accuracy tradeoff relative to some of the state-of-the-art, fast 6D pose and size estimation approaches. Confidence: 3: You are fairly confident in your assessment.
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The author's work focuses on point cloud registration only and requires a depth map or depth sensor and wouldn't work in a monocular i.e. RGB only case Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1**: Instance-based pose estimation has been solved? Why not compare categorical benchmarks? **A1**: Thanks for your valuable comments to help improve the quality of our paper. (1) Although the current RGB(D)-based instance-level pose estimation methods have achieved relatively good performance, they usually require the assumption of access to high-quality RGB images. As such, in more challenging lighting conditions (e.g., low-light conditions or color-varying light conditions), their instance-level estimation precisions would be significantly degraded. As shown in the table below, when we use the Gamma transformation (gamma=5,7,9) to darken the RGB image, the estimation precisions of the SOTA GDR-Net would degrade significantly. Thus, realizing lighting-robust, instance-level pose estimation is still an unresolved and meaningful task for wider applications of 6D pose estimation. (2) To alleviate the aforementioned dilemma, our paper thus focuses on how to utilize geometry-rich point-cloud data (without any RGB information) to achieve lighting-robust, instance-level pose estimation. The table below verifies that our method presents excellent robustness to challenging lighting conditions. However, as our method exploits the object-model alignment for instance-level estimation, the inherent intra-class shape variation in category-level estimation would confuse such alignment and thereby reduce the pose accuracy. As such, our current method mainly focuses on instance-level datasets for evaluation, rather than categorical datasets. In the future, following your suggestion, we will add discussions of papers [1, 2, 3] in our related work. Moreover, by integrating NOCS [R1] (normalized object coordinate space) into our model, the NOCS-enhanced SE(3) diffusion registration framework has the potential to address category-level estimation tasks. 
Specifically, in the surrogate registration model, we can take NOCS as the category-level model point cloud (PC), which can be used to construct category-level correspondences with the source PC for pose and scale estimations. This enables our SE(3) reverse process to gradually denoise the object pose and scale in a category-level setting. It's noted that our SE(3) diffusion process can directly adapt to category-level tasks without any modification. [R1] Normalized Object Coordinate Space for Category-Level 6D Object Pose and Size Estimation, CVPR’2019.

| | 5$^{\circ}$@mAP | 10$^{\circ}$@mAP | 1cm@mAP | 2cm@mAP |
|-----|---|---|---|---|
| GDR-Net (Gamma=5) | 0.10 | 0.23 | 0.06 | 0.14 |
| GDR-Net (Gamma=7) | 0.02 | 0.08 | 0.02 | 0.05 |
| GDR-Net (Gamma=9) | 0.01 | 0.04 | 0.01 | 0.02 |
| Diff-RPMNet (ours) | 0.18 | 0.47 | 0.51 | 0.72 |
| Diff-DCP (ours) | **0.22** | **0.51** | **0.65** | **0.82** |

**Q2**: Only evaluated on a single, fairly simple benchmark? Is our method useful for wild or zero-shot pose estimation? **A2**: (1) We hope to clarify that our method was not solely evaluated on a single, simple benchmark. Instead, following the SOTA method [R2], we conducted extensive comparisons on three challenging and widely-used 6D estimation benchmarks (TUD-L, LINEMOD, and Occluded-LINEMOD). Although they are instance-level datasets, using pure point-cloud data to achieve lighting-robust, instance-level estimation is still an unresolved yet important problem for wider applications of 6D pose estimation. (2) It’s a highly interesting idea to extend our framework to wild/zero-shot pose estimation. Since our current model focuses on using 3D registration to realize lighting-robust, instance-level estimation, we need to make some modifications to enable our model to handle wild/zero-shot estimation. Specifically, inspired by NOCS [R1], we can further exploit vector quantization to compress the different instance models into a set of primitive keypoint representations.
As such, these primitive representations can be viewed as a primitive model for constructing correspondences with the unseen object PC to estimate its pose and scale. [R2]: Learning-based Point Cloud Registration for 6D Object Pose Estimation in the Real World. ECCV’2022 **Q3**: Does our method suffer from a lot of hand-tuning, such as the number of denoising steps, like ICP? **A3**: Our framework can achieve good performance without a lot of hand-tuning like ICP. On the contrary, our ablation studies in Table 2 show that different configurations can consistently bring significant performance gains over the baseline. Particularly, for the number of denoising steps, Fig. 3 shows that more steps bring lower errors across all datasets. These ablation results effectively validate the robustness of our method to these hyperparameters, and indicate that our method does not need heavy hand-tuning like ICP to achieve good performance. **Q4**: Are diffusion models slow to train and infer? Can you comment on inference speed vs. accuracy? **A4**: (1) For training time, to ensure a fair comparison, we use the same number of training epochs as the compared methods. Particularly, during each epoch, given a training sample {X, M, H_0}, the compared methods directly use it for training, while our method generates one diffusion sample {X_t, M, H_t} for training (please refer to Alg. 1). Thus, the number of training samples utilized in each epoch is identical. This indicates that our diffusion model shares a comparable training time with other methods, and thus our diffusion model is not slow to train. (2) For inference time, Fig. 3 shows that it depends on the number of denoising steps, and a smaller number of steps accelerates inference while decreasing accuracy. For a good balance, we set the number of denoising steps to 5, which achieves excellent precision with tolerable speed (~0.17s).
In the future, as discussed in our future work, we will convert our SE(3) diffusion registration from the point-cloud space to a compact feature space to increase inference speed.
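The Gamma transformation used in A1 to darken RGB images (gamma = 5, 7, 9) is, under the usual convention, a power curve applied to normalized intensities, out = 255 · (in/255)^gamma, where gamma > 1 darkens the image. The sketch below is an assumed, minimal version of this preprocessing (not necessarily the authors' exact code; `gamma_darken` is a hypothetical helper).

```python
import numpy as np

def gamma_darken(img_uint8, gamma=5.0):
    """Apply a gamma curve; gamma > 1 pushes mid-tones toward black."""
    x = img_uint8.astype(np.float64) / 255.0
    return np.clip(255.0 * x ** gamma, 0.0, 255.0).astype(np.uint8)

img = np.full((4, 4, 3), 128, dtype=np.uint8)  # mid-gray test image
dark = gamma_darken(img, gamma=5.0)            # mid-gray (128) drops to ~8
```

Note how quickly mid-tones collapse toward black at gamma = 5, which is consistent with the severe accuracy drop reported for the RGB-based baseline under these conditions.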
Time Series as Images: Vision Transformer for Irregularly Sampled Time Series
Accept (poster)
Summary: This paper explores a very interesting direction to represent time series as plotted images and then stack vision transformers to perform representation learning. Following this idea, the paper conducts empirical studies on classification benchmarks of both irregularly sampled time series and regularly sampled ones. By leveraging this new data representation, the proposed method obtains state-of-the-art performance on three irregular time-series benchmarks and also produces competitive results on multiple regular time-series datasets. Strengths: In my view, the biggest strength of this paper is the first successful demonstration of applying image-driven methods to representation learning of irregularly sampled time series. After reading the related work section, I have also learned that this paper is not the first to represent time series as real images in general. Nevertheless, I think the successful demonstration of image-based time-series representation learning on time-series classification tasks is interesting and has sufficient novelty. Weaknesses: However, I do have several concerns about the experimental results at the current stage. - Given the relatively high performance variations when employing different plotting configurations and specific early stopping epochs on different datasets, I would like to know the detailed hyper-parameter tuning procedure used when developing ViTST. Are you tuning these options directly based on feedback from the test set? If not, can you elaborate on how the validation part is divided, and please report details of performance on both validation and test sets if possible. - This paper includes some ablation tests, but I think the existing results are far from enough to convince me about the robustness of ViTST. For example, as Table 3 shows, this method is sensitive to the existence of interpolation, markers, and colors.
I want to know whether ViTST is robust to 1) different line styles and widths given the existence of interpolation, 2) different marker sizes and types given the existence of markers, and 3) different color permutations assigned to multivariate time series. My central confusions lie in which aspects play indispensable roles in making ViTST work and whether ViTST is robust to those irrelevant options. Moreover, I would like to see more ablation results about grid layouts and image sizes. Tables 2, 3, 4 only cover a few, and there seem to be large performance variations, too. - Another question bothering me is why ViTST can obtain the best performance for irregularly sampled time series but is only roughly comparable to existing solutions for regular time series. - I have tried the code but cannot reproduce the results. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See the weakness part. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The major limitation is that we are not clear on many other critical factors ensuring the success of ViTST and whether it is robust to different plotting setups. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to express our sincere gratitude for the valuable feedback and constructive comments. Please find our response addressing the concerns below. **Response to W1: Detailed hyper-parameter tuning procedure and robustness test** The training/validation/test sets are randomly split. Initially, our selections for line style, line width, marker size, and colors were based on heuristics, with the goal of clearly illustrating patterns for human observation (on the training set). Following this, we adjusted the hyper-parameters based on the performance observed on the validation set. The performance on the validation and test sets along the training process on the P12 dataset is as follows:

| Steps | Training loss | Validation AUROC | Validation AUPRC | Test AUROC | Test AUPRC |
| --- | --- | --- | --- | --- | --- |
| 40 | 0.35 | 0.6988 | 0.269 | 0.6502 | 0.2098 |
| 80 | 0.5951 | 0.7843 | 0.373 | 0.7953 | 0.3698 |
| 120 | 0.5066 | 0.8124 | 0.3915 | 0.8286 | 0.4113 |
| 160 | 0.4837 | 0.8252 | 0.4091 | 0.8409 | 0.4686 |
| 200 | 0.4847 | 0.8304 | 0.4471 | 0.8531 | 0.4857 |
| 240 | 0.4395 | 0.8336 | 0.4569 | 0.8574 | 0.4781 |
| 280 | 0.4151 | **0.8357** | 0.4502 | 0.8578 | 0.4881 |
| 320 | 0.4017 | 0.8344 | 0.4443 | 0.8558 | 0.4882 |

... We choose the checkpoint that achieves the best AUROC score on the validation set for testing. The robustness studies on image-creation choices are shown below: **Color:** We manually assigned colors, with the principle of making adjacent line graphs have distinguishable colors. For comparison, we randomly selected colors 3 times. The results are provided below. Our manual choices weren't optimal for the P19 dataset; random color selections yielded superior results, boosting the AUPRC score to 54.5. The PAM dataset, however, remained less affected by changes in line color.
| P19 | AUROC | AUPRC |
| --- | --- | --- |
| Default | 89.4$\pm$1.9 | 52.8$\pm$3.8 |
| Random color 1 | 89.3$\pm$1.4 | 53.6$\pm$1.9 |
| Random color 2 | 89.1$\pm$1.9 | 54.5$\pm$3.5 |
| Random color 3 | 88.9$\pm$2.1 | 52.9$\pm$2.7 |

| P12 | AUROC | AUPRC |
| --- | --- | --- |
| Default | 85.6$\pm$1.1 | 49.8$\pm$2.5 |
| Random color 1 | 85.0$\pm$1.6 | 49.3$\pm$3.7 |
| Random color 2 | 83.9$\pm$1.5 | 48.6$\pm$2.7 |
| Random color 3 | 85.2$\pm$1.9 | 49.2$\pm$3.2 |

| PAM | Accuracy | Precision | Recall | F1 Score |
| --- | --- | --- | --- | --- |
| Default | 96.1$\pm$0.7 | 96.8$\pm$1.1 | 96.5$\pm$0.7 | 96.6$\pm$0.9 |
| Random color 1 | 94.5$\pm$1.3 | 95.8$\pm$0.8 | 95.0$\pm$1.2 | 95.3$\pm$1.0 |
| Random color 2 | 95.2$\pm$1.5 | 96.5$\pm$1.1 | 95.2$\pm$1.4 | 95.8$\pm$1.3 |
| Random color 3 | 95.4$\pm$1.7 | 96.6$\pm$0.9 | 95.9$\pm$1.5 | 95.9$\pm$1.1 |
| Random order 1 | 95.5$\pm$1.0 | 96.8$\pm$0.8 | 95.7$\pm$1.0 | 96.2$\pm$0.9 |
| Random order 2 | 95.2$\pm$0.7 | 96.6$\pm$0.6 | 95.6$\pm$0.8 | 96.0$\pm$0.7 |
| Random order 3 | 95.4$\pm$0.8 | 96.6$\pm$0.7 | 95.8$\pm$1.1 | 96.1$\pm$0.7 |

**Line width/style and marker size/style:** We set the line style to solid and the marker to '*'. For the P12 and P19 datasets, the line width and marker size are set to 1 and 2, respectively, while for the PAM dataset they are set to 0.5 and 0.5. The guiding principle is to optimize the line graphs for human visualization, ensuring that the lines effectively show trends and the markers clearly indicate observed data without obscuring the lines. We also tested other configurations. Due to limited space, we only show the results on P19 here.
| Line style | Line width | Marker | Marker size | AUROC | AUPRC |
| --- | --- | --- | --- | --- | --- |
| Solid | 1 | * | 2 | 89.4$\pm$1.9 | 52.8$\pm$3.8 |
| Dotted | 1 | * | 2 | 88.7$\pm$2.0 | 53.3$\pm$3.4 |
| Dashed | 1 | * | 2 | 88.5$\pm$2.8 | 53.3$\pm$4.2 |
| Solid | 2 | * | 2 | 88.5$\pm$2.3 | 53.6$\pm$3.4 |
| Solid | 3 | * | 2 | 88.9$\pm$1.9 | 52.8$\pm$3.2 |
| Solid | 1 | o | 2 | 88.8$\pm$2.6 | 53.2$\pm$3.8 |
| Solid | 1 | ^ | 2 | 88.8$\pm$2.2 | 53.1$\pm$4.0 |
| Solid | 1 | * | 1 | 88.6$\pm$2.4 | 52.6$\pm$3.6 |
| Solid | 1 | * | 3 | 89.2$\pm$2.1 | 53.1$\pm$4.2 |

From the results, we can see that the line style has a significant impact on performance, whereas markers appear to have minimal impact. This observation is consistent with our initial heuristic that a solid line would likely be more effective and that markers might not have a big influence. As for other factors, an optimal combination of line and marker could potentially enhance visualization, thereby improving performance. We did not conduct comprehensive hyperparameter tuning for these variables. As such, the peak performance on P19 has likely not yet been fully explored.

**Response to Q: why ViTST can obtain the best performance for irregularly sampled time series but is only roughly comparable to existing solutions for regular time series?**

As shown in Table 4, our ViTST's average accuracy is only behind TST, which is a transformer that directly operates on numerical time series. Given the shared architecture between ViTST and TST, we assume their performance gap might be attributed to the fact that extracting patterns from complete, fixed-sized numerical time series data is more direct and introduces less noise than operating on image data transformed from time series. However, when there is missing data, the imputation required to convert the data into fixed-sized input can introduce much noise, leading to suboptimal performance of standard numerical-based specialized methods.
However, our method might not be as sensitive to the missing parts in the line graph. Through simple linear interpolation, the model might be able to approximate the pattern and trend from the line graph, ensuring that ViTST remains effective even with irregularly sampled data. --- Rebuttal Comment 1.1: Title: Thanks for the reply Comment: I think this paper is interesting, so I tried to reproduce your results with your code. However, I found some crucial issues and would like to hear your answers. - When deciding how to plot time-series images, especially the y-axis ranges, your code seems to use all data (including both train and test), which is improper because this step is just like doing normalization for time series; you should only use the training data to determine these hyper-parameters. - It is hard for me to reproduce your results reported in the paper. I mainly followed your provided code to run the experiments. Here is what I obtained on P12 using ViT: Accuracy = 71.8 +/- 9.3, AUPRC = 16.1 +/- 1.2, AUROC = 54.0 +/- 2.1, Precision = 20.0 +/- 9.0, Recall = 22.6 +/- 13.6. Would you provide detailed instructions for me to replicate your results? I think tuning such a vision model must be very tricky. So, I will change the score to 'Reject' unless I can reproduce the results. --- Reply to Comment 1.1.1: Title: Response to the reviewer's reply Comment: Thank you for your interest in our work. We value your feedback and would like to address your concerns as follows: Our setup resembles the scenario where the possible value range of the variables is predetermined or well understood. In cases where it isn't, the range can be derived when the test set features (not labels) are available in practice.
Furthermore, with a sufficient volume of training data, the disparity between the distribution of the training set and the actual distribution (represented by the whole dataset in our experiments) is minimal. Taking your feedback into account, we also explored rescaling based on the training set's distribution. The results are as follows:

| P19 | AUROC | AUPRC |
| --- | --- | --- |
| ViTST | 89.2$\pm$2.0 | 53.1$\pm$3.4 |

| P12 | AUROC | AUPRC |
| --- | --- | --- |
| ViTST | 84.2$\pm$1.7 | 48.0$\pm$4.7 |

| PAM | Accuracy | Precision | Recall | F1 Score |
| --- | --- | --- | --- | --- |
| ViTST | 93.5$\pm$1.4 | 94.9$\pm$1.2 | 94.1$\pm$1.3 | 94.4$\pm$1.2 |

Our approach still achieves superior performance. On the extensive dataset P19, the performance remains consistent, underscoring that as training data grows, the performance difference between the two settings becomes negligible. To reproduce the experimental results, you can follow the instructions below: First, ensure that you've acquired the processed data as directed in the `README` of our code repository. To create the images, execute the following commands sequentially:

```shell
cd dataset/P12data/process_scripts
python ParamDescription.py
python ConstructImage.py
```

Upon completion, you should have a directory named `differ_interpolation_6*6_images` created under `dataset/P12data/processed_data/`, containing all the created images. For training, navigate to the `code/Vision-Text/` directory. We provided a script for the P19 dataset in the README.
For the P12 dataset, you can use the following script:

```shell
CUDA_VISIBLE_DEVICES=0 python3 run_VisionTextCLS.py \
    --image_model swin \
    --text_model roberta \
    --freeze_vision_model False \
    --freeze_text_model False \
    --dataset P12 \
    --dataset_prefix differ_interpolation_6*6_ \
    --seed 1799 \
    --save_total_limit 1 \
    --train_batch_size 48 \
    --eval_batch_size 196 \
    --logging_steps 20 \
    --save_steps 100 \
    --epochs 4 \
    --learning_rate 2e-5 \
    --n_runs 1 \
    --n_splits 5 \
    --cutout_num 16 \
    --cutout_size 16 \
    --do_train
```

This script uses the default vision model Swin. If you're inclined to use the ViT model, simply change the `--image_model` argument to `vit`. If you still encounter challenges in reproducing the results using the provided instructions, please provide details on the specific parameters and procedures you used. Your current results resemble those of a ViT model trained from scratch; knowing the detailed parameters and arguments you used will aid us in identifying any inconsistencies and offering more accurate guidance.
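As a side note on the y-axis discussion above, the train-only rescaling can be sketched as follows. This is a minimal illustration with hypothetical helper names, not the actual logic of `ConstructImage.py`: per-variable y-limits are computed from the training split only and then held fixed when rasterizing each variable's line graph.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless rendering
import matplotlib.pyplot as plt

def train_only_ylims(train_series, margin=0.1):
    """Per-variable (low, high) y-limits derived from the training split only."""
    lims = []
    for values in train_series:  # one array of observed values per variable
        lo, hi = float(np.min(values)), float(np.max(values))
        span = hi - lo if hi > lo else 1.0  # guard against constant series
        lims.append((lo - margin * span, hi + margin * span))
    return lims

def render_cell(times, values, ylim, cell_px=128, dpi=100):
    """Rasterize one irregularly sampled variable as a line-graph grid cell."""
    fig = plt.figure(figsize=(cell_px / dpi, cell_px / dpi), dpi=dpi)
    ax = fig.add_axes([0, 0, 1, 1])
    ax.plot(times, values, linestyle="solid", linewidth=1, marker="*", markersize=2)
    ax.set_ylim(*ylim)  # fixed range, independent of this sample's values
    ax.axis("off")
    fig.canvas.draw()
    img = np.asarray(fig.canvas.buffer_rgba())[..., :3].copy()
    plt.close(fig)
    return img  # uint8 array of shape (cell_px, cell_px, 3)

# toy irregularly sampled variable (matplotlib linearly interpolates between points)
t = np.array([0.0, 0.7, 1.1, 2.9, 4.0])
v = np.array([1.0, 3.0, 2.0, 5.0, 4.0])
ylims = train_only_ylims([v])
cell = render_cell(t, v, ylims[0])
print(cell.shape)  # (128, 128, 3)
```

Held-out samples would be rendered with the same fixed `ylim`, so out-of-range values clip at the cell border instead of silently rescaling the axes.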
Summary: The paper investigates the use of pre-trained Vision Transformers (ViT) for both regularly and irregularly sampled time series classification. Two primary aspects are discussed: transforming time series into images and performing time series classification using pre-trained ViT. Through a series of experiments, the authors illustrate the feasibility of this research direction. Strengths: The paper is well-organized and presents an engaging perspective on time series classification, particularly focusing on the potential of ViT in relevant tasks. The authors conducted extensive empirical evaluations to showcase the effectiveness of the proposed framework, which is a strong aspect of the work. Weaknesses: + The paper falls short in providing an in-depth analysis of several critical aspects, such as the advantages of using ViT over other foundational models for time series, the factors contributing to successful cross-domain transfer when employing ViT for time series classification, and how key design factors impact performance. See the listed questions for details. + In addition, some crucial technical details are missing. For example, it is unclear which layers in ViT are frozen during downstream fine-tuning. Justification for the use of line graphs over other visualization strategies (e.g., frequency maps) is not adequately discussed in Section 3.1. The model's efficiency compared to specialized models in time series modeling is also not clear. + There are some writing issues. For example, the title should be “…for Irregularly Sampled Time Series Classification” unless the effectiveness of ViTST is evident on other time series tasks such as forecasting and imputation. Besides, it is unknown whether the backbone ViT model is frozen in Fig. 2. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: + What distinguishes the use of ViT for time series modeling from other foundational models like language or acoustic models (e.g., Voice2Series)?
Are there unique advantages, beyond performance, to using ViT? + What theoretical factors contribute to such a successful cross-domain transfer? According to the experiments, some factors like line colors may significantly affect the model performance. A follow-up question is how the robustness of the proposed framework can be claimed for general time series classification when changing line shapes, colors, or even moving to time series datasets from domains other than healthcare. + What types of interpolation are used in the experiments? + What is the motivation behind Section 4.5? Is it for a second pre-training based on pre-trained ViT, or is it for pre-training from scratch? + In the transformation of time series to images, why do the authors opt to combine different univariate time series into a single image, as opposed to potential alternative implementations like introducing an additional layer to fuse different input images before feeding them into ViT? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: The authors have not adequately addressed the limitations and potential negative societal impacts of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your valuable feedback. Here is our response to address your concerns: **Response to weaknesses**: - **Layer freezing**: In our implementation, we did not freeze any of the layers. All the parameters are tunable during fine-tuning on the time series dataset. - **Visualization strategy**: Our primary goal is to showcase the potential of a simple and intuitive approach that uses vision transformers for time series modeling. Line graphs were selected as they are straightforward for humans to interpret. We show that such a strategy already achieves competitive results. Our current work serves as a pivotal first step in exploring the adaptation of pre-trained vision transformers for time series modeling, and we demonstrate its feasibility. We value the potential of exploring other visualization techniques to further enhance the approach in future work. - **Model efficiency**: We present the performance comparison of our method with several representative specialized methods on the irregularly sampled time series datasets in Table 2 and Figure 3. We also tested on ten more diverse time series classification datasets as described in Section 4.4. The results from these experiments indicate the efficacy and versatility of our approach in addressing a diverse range of time series shapes and types. - **Writing issues**: Thank you for pointing this out. We will change the title as suggested. As for Figure 2, none of the layers or parameters are frozen during fine-tuning on the time series dataset. We will add a note for clarity. **Response to questions**: - **Comparison with other foundational models for time series modeling**: Our work aims to showcase that vision transformers can be adapted for time series modeling. We do not claim it is the optimal or only foundation model that could work for this purpose.
However, compared with language models and voice models (Voice2Series) for time series modeling, our vision-based approach might offer the following advantages beyond performance: (1) **Simplicity and Compression**: Translating time series data into images provides a more compressed and straightforward representation than converting them into language. In our preliminary experiments, we attempted to transform time series data into text. We observed challenges, especially with multivariate time series that have a significant number of variables or long sequences. This is because most affordable language models have a context window of fewer than 4096 tokens. Even if each observation is considered a single token, accommodating time series that surpass this context window size is problematic. In contrast, time series data of any dimension can seamlessly fit into a single image, unconstrained by length or magnitude. (2) **Generality**: While Voice2Series showcases potential in handling univariate time series, its applicability to multivariate and irregularly sampled time series remains unclear. (3) **Maturity of Domain**: Using our vision-based method provides the opportunity to leverage well-studied techniques from the established computer vision field. - **Theoretical factors contributing to successful transfer**: The success rests on the hypothesis that a model pre-trained on large image datasets can capture generic features within the images, which can then be fine-tuned for specific tasks like time series classification. Our experiments indicate that certain factors within the images, such as line colors/styles/widths and grid orders (more details can be found [here](https://openreview.net/forum?id=ZmeAoWQqe0&noteId=13G2qHdC3K)), and also the pre-training of vision transformers, do influence performance. We recognize the need for a more comprehensive theoretical analysis in future studies.
This work represents an initial exploration of utilizing vision transformers for time series modeling, and our results demonstrate its feasibility and promise. - **Robustness to general time series classification tasks**: We conducted experiments on two healthcare datasets (P19/P12), a human activity dataset (PAM), and also ten general time series classification datasets as outlined in Section 4.4. In total, we assessed our method on 13 distinct datasets, employing distinct image-rendering settings for each dataset. Our results show that as long as the images are rendered with the same settings in the training set and the evaluation set, the performance of our approach is consistently competitive. - **Interpolation**: Taking a minimalist approach, we employ linear interpolation, i.e., using straight lines to connect consecutive data points on the line graph. The main focus of our work is to show that adapting vision transformers to model time series line graphs works, and our current results demonstrate that it does. However, we value the potential of exploring other interpolation methods to further enhance performance. - **Motivation behind Section 4.5**: As mentioned above, our approach bridges the time series and computer vision domains, opening up the possibility of bringing well-studied computer vision techniques into the time series domain. Therefore, in Section 4.5, we explored the prevalent masked image modeling method for time series modeling. Likewise, we started the self-supervised learning from pre-trained checkpoints. - **Motivation for image combination**: In our approach, we plot the time series of each variable, which can be seen as a sub-image. These sub-images are assembled into a single "super image". In our preliminary experiment, we tested obtaining separate image representations for each sub-image, then concatenating and feeding them into a prediction layer.
However, this approach consistently underperformed the unified image processing technique. We hypothesize that integrating them into a single image enables the model to discern more intricate correlations among different variables directly from raw features. --- Rebuttal Comment 1.1: Title: Response to author rebuttal Comment: Dear reviewer, The author rebuttal appears to have presented several targeted responses to your questions. Are your questions appropriately addressed? If they are, would you consider re-assessing your score in light of them. If not, please do provide additional context and feedback to the author. In either case, please provide an acknowledgement of the effort the authors put in, why your questions have (or have not) been addressed and what your assessment of the work is in light of this evidence with a view to reach consensus with the other reviewers on this work. -AC --- Rebuttal Comment 1.2: Title: Reviewer's reply to the rebuttal Comment: I would like to express my gratitude to the authors for their detailed responses, which have helped to clarify several of the issues that were previously raised. After carefully reviewing the manuscript, considering the comments of other reviewers, and evaluating the authors' rebuttals, I am inclined to adjust my score to borderline reject. However, I must emphasize that the paper, in its current form, is not yet in a condition suitable for publication at a conference of NeurIPS' caliber. There are several aspects that require more comprehensive analysis, and these issues have not been fully resolved in the authors' rebuttal: + What theoretical factors contribute to such a successful cross-domain transfer? As this is the main argument in this research, answering this question is crucial. + I see no experiments related to model efficiency comparisons. The aforementioned results (e.g., Tab. 2 and Fig. 3) are not related to this.
Otherwise, it makes no sense to claim *simplicity and compression* in this research. + The authors' rebuttal does not convincingly justify the impact of line shapes and colors on the robustness of the proposed method. + Regarding the motivation for image combination, I see no results and discussion related to the mentioned preliminary experiments. Please consider clarifying this: *In our preliminary experiment, we tested obtaining separate image representations for each sub-image and concatenating and feeding them into a prediction layer. However, this approach consistently underperformed the unified image processing technique.* --- Reply to Comment 1.2.1: Title: Response to the reviewer's reply Comment: Thank you for your thoughtful feedback. We appreciate the opportunity to clarify and address the concerns raised. **R1:** The success in cross-domain transfer, as we hypothesized, might be rooted in the pre-training stage with masked image modeling. As found in [1], vision transformer models pretrained with masked image modeling introduce a locality inductive bias across all layers and enlarge the receptive field. These models have shown superior performance in tasks with weak semantics, like geometric and motion tasks or fine-grained classification, which are analogous to our task. Nonetheless, we acknowledge the importance of more in-depth exploration and intend to pursue this in future work. **R2:** Regarding **simplicity and compression**, our comparison is with **language foundational models**, in response to the reviewer's question "**What distinguishes the use of ViT for time series modeling from other foundational models like language or acoustic models (e.g., Voice2Series)?**". As we mentioned, "Translating time series data into images provides a more compressed and straightforward representation **than converting them into language**. In our preliminary experiments, we attempted to transform time series data into text.
We observed challenges, especially with multivariate time series that have a significant number of variables or long sequences. This is because most affordable language models have a context window of fewer than **4096** tokens. Even if each observation is considered a single token, accommodating time series that usually have tens of thousands of observations is problematic. In contrast, time series data of any dimension can seamlessly fit into a single image, unconstrained by length or magnitude." As for efficiency compared to other models, we do not claim our model is more efficient than the non-vision baselines. The results and discussion can be found at https://openreview.net/forum?id=ZmeAoWQqe0&noteId=kAotjOzWij. **R3:** We apologize for previously omitting the link to the results on the influence of line shapes and colors on our method's robustness. Kindly see https://openreview.net/forum?id=ZmeAoWQqe0&noteId=13G2qHdC3K for the results and discussion. **R4**: In our preliminary experiments, we obtained the patch representations of each sub-image separately and then concatenated the patch representations of all sub-images to feed into the final prediction layer. Compared with our default combination approach, the difference lies in whether a patch can attend to patches from other sub-images, and in the position embeddings used when learning the patch/image representations. All the other parameters remain the same.
We tested on ViT, and the performance comparisons are shown below:

| P19 | AUROC | AUPRC |
| --- | --- | --- |
| ViT | 87.9$\pm$2.5 | 51.6$\pm$3.7 |
| ViT-Subimage | 85.1$\pm$1.5 | 47.9$\pm$3.4 |

| P12 | AUROC | AUPRC |
| --- | --- | --- |
| ViT | 84.8$\pm$1.3 | 48.1$\pm$3.8 |
| ViT-Subimage | 77.6$\pm$3.2 | 35.5$\pm$6.2 |

| PAM | Accuracy | Precision | Recall | F1 score |
| --- | --- | --- | --- | --- |
| ViT | 93.4$\pm$0.7 | 94.7$\pm$0.9 | 94.1$\pm$0.7 | 94.3$\pm$0.7 |
| ViT-Subimage | 90.4$\pm$1.3 | 92.9$\pm$1.0 | 91.12$\pm$0.9 | 91.9$\pm$1.0 |

The results show that attending to patches from other sub-images offers an advantage. This likely allows cross-variable correlations to be captured at a granular level within the self-attention layers, as opposed to only at the final linear prediction layer. **References:** [1] Xie, Zhenda, et al. "Revealing the dark secrets of masked image modeling." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.
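The compression point in R2 can be made concrete with a toy token count. The helper functions and numbers below are illustrative assumptions, not from the paper: a fixed-size image yields a fixed number of ViT patch tokens regardless of series length, while a text encoding needs at least one token per observation.

```python
def vit_token_count(image_hw, patch=16):
    """Number of patch tokens a ViT sees for an image of size (H, W)."""
    h, w = image_hw
    assert h % patch == 0 and w % patch == 0, "image must tile into patches"
    return (h // patch) * (w // patch)

def text_token_count(seq_len, n_vars):
    """Lower bound for a text encoding: one token per observation."""
    return seq_len * n_vars

# e.g. a long series with 17984 time steps and 6 variables
img_tokens = vit_token_count((256, 384))  # fixed by image size, not series length
txt_tokens = text_token_count(17984, 6)   # far beyond a 4096-token context window
print(img_tokens, txt_tokens)  # 384 107904
```

Any further growth in sequence length changes only how densely the line graph is drawn, not the patch-token budget.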
Summary: The paper focuses on learning from irregularly sampled time series data. The paper presents a simple approach that converts irregularly sampled time series into an image where different input channels are line graphs. The converted images are then modeled using a standard Transformer model. Experiments show that the proposed approach outperforms the SOTA approaches on several datasets. Strengths: - The paper focuses on the task of learning from irregularly sampled data, which is important in many domains. - Experimental results show the effectiveness of the approach when compared to baselines and other recent approaches. - The paper is clear and well written. Weaknesses: - The irregularly sampled time series data are represented by a line graph where observed points are connected using a straight line, which is a very ad hoc way to deal with missing values and irregularity. - The novelty is marginal as the proposed approach mainly uses a standard Transformer model where time series data is fed as images. - The standard deviation of the results in Table 2 seems very high. It is not immediately clear if the proposed approach achieves statistically significant performance or if it's just noise. - The proposed approach uses static features present in the data. It is not immediately clear how these static features were used with the baseline approaches. - The paper notes that all the methods were trained for 20 epochs. Did they all converge in 20 epochs? The baseline approaches should be trained until convergence. - The paper is missing experiments on the MIMIC-III dataset, the most commonly used dataset in this space. - The paper is missing comparisons with recent ODE-based approaches which achieve SOTA performance. - Does the size of the image change if the sequence length changes? Technical Quality: 3 good Clarity: 3 good Questions for Authors: I have mentioned my concerns in the weaknesses section. Confidence: 5: You are absolutely certain about your assessment.
You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: No. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate your valuable feedback. Below is our response addressing your concerns. **Response to W1: ad hoc way to deal with missing values and irregularity** While our method may seem ad hoc, it is simple and notably effective. It largely simplifies model design for irregular time series modeling and bridges time series analysis with the computer vision domain. As an initial exploration, we demonstrate that such an approach is feasible and promising. Further exploration of better ways to handle missing data is encouraged. **Response to W2: marginal novelty as it mainly uses a standard Transformer model** Our paper primarily demonstrates that pre-trained vision transformers, prevalent in computer vision tasks, can be easily adapted for time series classification by representing the series as images. Our approach is simple, effective, and general. It not only largely simplifies dedicated model design for irregular time series modeling but also bridges the time series domain with the computer vision domain. We recognize the potential of developing vision transformers tailored for time series modeling as a promising direction for future improvement. However, our current work represents the critical first step and demonstrates the feasibility and promise of this approach. **Response to W3: standard deviation of results in Table 2** As shown in Table 2, our approach exhibits the lowest standard deviation among all compared methods, except for Transformer-mean on the P19 dataset. **Response to W4: how the static feature is used in the baseline approaches** The static feature is converted into a vector and incorporated with the time series data for modeling in the baseline approaches. In our experiment, we convert the static feature into natural language sentences and use Roberta to encode it. As mentioned in B2 in the Appendix, we have also tried converting the static feature into a vector and utilizing an MLP to encode it.
Specifically, we used a 2-layer MLP with a hidden dimension of 128 and an output dimension of 96 to encode the static feature. The obtained output is concatenated with the representation learned by the vision transformer to make the prediction. We also present the performance of our approach without using static features (only a Swin model on the time series image data). The corresponding results are detailed below.

| P19 | AUROC | AUPRC |
| --- | --- | --- |
| Swin | 89.4$\pm$1.8 | 50.2$\pm$3.0 |
| Swin-MLP | 88.6$\pm$1.3 | 51.4$\pm$3.7 |
| Swin-Roberta | 89.4$\pm$1.9 | 52.8$\pm$3.8 |

| P12 | AUROC | AUPRC |
| --- | --- | --- |
| Swin | 84.3$\pm$0.6 | 49.3$\pm$3.7 |
| Swin-MLP | 84.6$\pm$0.9 | 48.7$\pm$3.2 |
| Swin-Roberta | 85.6$\pm$1.1 | 49.8$\pm$2.5 |

As can be seen, when both our method and the baselines use the vector converted from the static feature for modeling, our approach still outperforms the baselines. Utilizing a language model to encode patient information, which preserves its inherent meaning, appears more effective than simply employing an MLP to process vectorized features. Furthermore, leveraging a language model may offer broader applicability, especially in contexts with textual data like clinical notes. **Response to W5: did all the baselines converge in 20 epochs** All baseline models were trained for a maximum of 20 epochs and converged within this period. This setup is consistent with previous work [1]. **Response to W6/7: missing experiments on MIMIC-III and comparison with recent ODE-based methods** We will endeavor to incorporate experiments on MIMIC-III and comparisons with recent ODE-based methods in our revised version. **Response to W8: does the size of the image change if the sequence length changes?** No, the sequence length does not directly influence the size of the images. Regardless of its length, any time series can be fitted into a line graph in a grid cell.
The dimensions of this grid, which are user-defined, determine the overall image size. To illustrate, consider the EW dataset detailed in Section 4.4. It has a sequence length of 17984 with 6 variables, which can be represented in a 2x3 grid. We choose a grid-cell size of 128x128, resulting in an image size of 256x384. However, one can also choose any other cell size, which changes the image size accordingly. [1] Zhang, Xiang, et al. "Graph-guided network for irregularly sampled multivariate time series." arXiv preprint arXiv:2110.05357 (2021). --- Rebuttal Comment 1.1: Title: Response to author rebuttal Comment: Dear reviewer, The author rebuttal appears to have presented several targeted responses to your questions. Are your questions appropriately addressed? If they are, would you consider re-assessing your score in light of them. If not, please do provide additional context and feedback to the author. In either case, please provide an acknowledgement of the effort the authors put in, why your questions have (or have not) been addressed and what your assessment of the work is in light of this evidence with a view to reach consensus with the other reviewers on this work. -AC
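The grid arithmetic in the W8 response above can be sketched as follows, assuming (hypothetically) that the row count and cell size are the user-defined knobs and the column count follows from the number of variables:

```python
import math

def image_size(n_vars, grid_rows, cell_px=128):
    """Overall (height, width) of the composed image: one line-graph cell per variable."""
    grid_cols = math.ceil(n_vars / grid_rows)
    return grid_rows * cell_px, grid_cols * cell_px

# EW dataset example: 6 variables on a 2x3 grid of 128x128 cells
print(image_size(6, grid_rows=2))  # (256, 384)
```

Note that the sequence length never appears: it only affects how much detail is squeezed into each fixed-size cell.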
Summary: This paper describes a surprisingly simple and effective approach for applying computer vision Transformer models to time series classification. Multivariate time series inputs are plotted in a grid to produce images that are used to fine-tune pretrained vision Transformer models. This approach achieves state-of-the-art results on irregularly sampled datasets and competitive results on regularly sampled datasets. Aside from the empirical efficacy, the results are relevant to understanding the capabilities of Transformers and the effectiveness of cross-domain pretraining. Strengths: I think the basic finding of the paper - that pre-trained vision Transformers are effective at performing time series classification - is original and significant. It raises very interesting questions about the general efficacy of pre-trained Transformers and the relationship between time series and visual data that could inform future work. In a practical sense, I have some concerns detailed below but I generally appreciate the simplicity and efficacy of the proposed approach. The basic results are convincing and the ablation results are extensive and informative. Most of the questions I would have had about the model are already addressed in the main paper or Appendix. The paper was well-written and clear. Results are reproducible since full code was provided. Weaknesses: To use the proposed method requires making a number of somewhat arbitrary decisions on how to convert time series to images: what layout to use, what order to put the time series in, what colours to use, line width/style, etc. The results in Table 3 and the Appendix show that decisions like these can affect performance even though they don't strictly change the information contained in the input. While these factors can be empirically tuned, the lack of robustness to seemingly arbitrary choices is concerning. 
In particular, the use of matplotlib to generate inputs makes the input construction somewhat poorly controlled. While I appreciate the simplicity of using an existing library, this leaves many implementation details about how exactly plots are rendered up to the library. Again, given that purely visual variations can affect model performance, it's not clear how important these implementation details are. I would have preferred to see a more precise approach that formally determined what numeric value is assigned to each pixel, making these details apparent. The range of irregularly sampled time series datasets is somewhat small to draw general conclusions from, but this is in line with existing work in this area. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: How were line colours selected? If random, what distribution of colours was used? Is there any justification for the specific choice made? For your self-supervised learning experiments, did you start with a pretrained model? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Limitations are not explicitly discussed. No societal impact concerns come to mind, but I think technical limitations should have been addressed. These could include comparing training and inference resource requirements to other models and acknowledging points like those I raised in the Weaknesses section if the authors agree they are limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's recognition of the value and strengths of our work, as well as the constructive feedback and suggestions. We aim to address your concerns in our response as follows: **Response to W1: Robustness to seemingly arbitrary choices** Our approach does involve several decisions during the image transformation. While we have already conducted ablation studies on the choice of layout, we have further expanded our investigation to cover additional parameters. **Order and Color:** Our default approach orders the variables based on the number of observations within each. For comparison, we shuffled this order three times and assessed the performance. We manually assigned colors, with the principle of making adjacent line graphs have distinguishable colors. Likewise, we randomly selected colors three times for comparison. The results are provided below. On the P19 and P12 datasets, which have varying observation counts across variables, the sorted order outperforms randomly shuffled orders. On the PAM dataset, where each variable has a consistent number of observations, the order doesn't have a significant impact. Regarding color selection, our manual choices weren't optimal for the P19 dataset; random color selections yielded superior results, boosting the AUPRC score to 54.5. The PAM dataset, however, remained less affected by changes in line color.
| P19 | AUROC | AUPRC |
| --- | --- | --- |
| Default | 89.4$\pm$1.9 | 52.8$\pm$3.8 |
| Random order 1 | 88.3$\pm$1.8 | 49.9$\pm$3.2 |
| Random order 2 | 88.2$\pm$1.6 | 51.0$\pm$3.9 |
| Random order 3 | 88.5$\pm$2.3 | 51.9$\pm$2.6 |
| Random color 1 | 89.3$\pm$1.4 | 53.6$\pm$1.9 |
| Random color 2 | 89.1$\pm$1.9 | 54.5$\pm$3.5 |
| Random color 3 | 88.9$\pm$2.1 | 52.9$\pm$2.7 |

| P12 | AUROC | AUPRC |
| --- | --- | --- |
| Default | 85.6$\pm$1.1 | 49.8$\pm$2.5 |
| Random order 1 | 84.3$\pm$2.2 | 48.0$\pm$4.5 |
| Random order 2 | 84.2$\pm$1.7 | 47.6$\pm$3.7 |
| Random order 3 | 83.9$\pm$1.5 | 46.9$\pm$4.0 |
| Random color 1 | 85.0$\pm$1.6 | 49.3$\pm$3.7 |
| Random color 2 | 83.9$\pm$1.5 | 48.6$\pm$2.7 |
| Random color 3 | 85.2$\pm$1.9 | 49.2$\pm$3.2 |

| PAM | Accuracy | Precision | Recall | F1 Score |
| --- | --- | --- | --- | --- |
| Default | 96.1$\pm$0.7 | 96.8$\pm$1.1 | 96.5$\pm$0.7 | 96.6$\pm$0.9 |
| Random color 1 | 94.5$\pm$1.3 | 95.8$\pm$0.8 | 95.0$\pm$1.2 | 95.3$\pm$1.0 |
| Random color 2 | 95.2$\pm$1.5 | 96.5$\pm$1.1 | 95.2$\pm$1.4 | 95.8$\pm$1.3 |
| Random color 3 | 95.4$\pm$1.7 | 96.6$\pm$0.9 | 95.9$\pm$1.5 | 95.9$\pm$1.1 |
| Random order 1 | 95.5$\pm$1.0 | 96.8$\pm$0.8 | 95.7$\pm$1.0 | 96.2$\pm$0.9 |
| Random order 2 | 95.2$\pm$0.7 | 96.6$\pm$0.6 | 95.6$\pm$0.8 | 96.0$\pm$0.7 |
| Random order 3 | 95.4$\pm$0.8 | 96.6$\pm$0.7 | 95.8$\pm$1.1 | 96.1$\pm$0.7 |

**Line width/style and marker size/style:** We configured the line style as solid and the marker as '*'. For the P12 and P19 datasets, the line width and marker size are set to 1 and 2, while for the PAM dataset they are set to 0.5 and 0.5. The guiding principle is to optimize the line graphs for human visualization, ensuring that the lines effectively show trends and that markers clearly indicate observed data without obscuring the lines. We also tested other configurations; due to limited space, we only show the results on P19 here.
| Line style | Line width | Marker | Marker size | AUROC | AUPRC |
| --- | --- | --- | --- | --- | --- |
| Solid | 1 | * | 2 | 89.4$\pm$1.9 | 52.8$\pm$3.8 |
| Dotted | 1 | * | 2 | 88.7$\pm$2.0 | 53.3$\pm$3.4 |
| Dashed | 1 | * | 2 | 88.5$\pm$2.8 | 53.3$\pm$4.2 |
| Solid | 2 | * | 2 | 88.5$\pm$2.3 | 53.6$\pm$3.4 |
| Solid | 3 | * | 2 | 88.9$\pm$1.9 | 52.8$\pm$3.2 |
| Solid | 1 | o | 2 | 88.8$\pm$2.6 | 53.2$\pm$3.8 |
| Solid | 1 | ^ | 2 | 88.8$\pm$2.2 | 53.1$\pm$4.0 |
| Solid | 1 | * | 1 | 88.6$\pm$2.4 | 52.6$\pm$3.6 |
| Solid | 1 | * | 3 | 89.2$\pm$2.1 | 53.1$\pm$4.2 |

From these results, we can see that the line style has a noticeable impact on performance, while markers do not. As for the other factors, a good combination of line width and marker size might lead to better performance. Our approach achieves competitive results in most settings.

**Response to W2: A more precise approach for image rendering.** We greatly value your insightful feedback. Our current study is a pivotal first step in investigating the potential of vision transformers applied to images transformed from time series. Currently, we've adopted a straightforward approach using matplotlib for image rendering, and our results suggest the feasibility and potential of this method. In light of your suggestion, we are considering directly assigning pixel values for image rendering, mirroring prior work on synthetic images used in vision model pre-training [1]. We leave this as future work.

**Response to Q1: How were line colours selected?** As highlighted in our response to W1, for datasets with a limited number of variables, we manually selected the colors, ensuring that adjacent line graphs had distinguishable colors. For datasets with a larger number of variables, we divided the RGB value range into slices and shuffled them, ensuring that neighboring time series have distinct RGB values.
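For concreteness, a minimal sketch of how one grid cell might be rendered with matplotlib under the configuration described above (solid line, '*' marker, line width 1, marker size 2). The function names and the evenly-spaced-hue color scheme are our illustrative assumptions, not the authors' actual implementation.

```python
import colorsys

import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt
import numpy as np

def distinct_colors(n):
    """Evenly spaced hues so adjacent series get distinguishable colors."""
    return [colorsys.hsv_to_rgb(i / n, 0.9, 0.9) for i in range(n)]

def render_cell(t, v, color, path, pixels=128, dpi=64):
    """Plot one (possibly irregularly sampled) variable as a line-graph cell."""
    fig, ax = plt.subplots(figsize=(pixels / dpi, pixels / dpi), dpi=dpi)
    ax.plot(t, v, color=color, linestyle="solid", linewidth=1,
            marker="*", markersize=2)
    ax.axis("off")  # keep only the curve; no axes or ticks
    fig.savefig(path)
    plt.close(fig)

# toy irregular timestamps for a single variable
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 48, size=40))
render_cell(t, np.sin(t / 4), distinct_colors(6)[0], "cell.png")
```

The output pixel size follows from matplotlib's `figsize * dpi` convention, so a 2-inch figure at 64 dpi yields the 128x128 cell discussed in the thread.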
**Response to Q2: For your self-supervised learning experiments, did you start with a pretrained model?** Yes, we started with the pre-trained checkpoint. This is an exploration of the potential of integrating vision techniques into the time series domain: similar to masked image modeling work [2] in CV, we start from the pre-trained checkpoint and apply self-supervised learning. **References** [1] Kataoka, Hirokatsu, et al. "Pre-training without natural images." Proceedings of the Asian Conference on Computer Vision. 2020. [2] Xie, Zhenda, et al. "SimMIM: A simple framework for masked image modeling." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022. --- Rebuttal Comment 1.1: Comment: Thank you for addressing my questions. The additional ablations look to be in line with the existing ones in the paper - it's good to have a more detailed understanding of the effects of these design choices, but some of them still show a somewhat concerning degree of variance. It makes sense that clarity to humans might just be the best heuristic here. My overall assessment is still the same: the general findings of the paper are original and interesting, and the presentation and reproducibility are very good. I still have some practical concerns around robustness and I think limitations should have been discussed.
NeurIPS_2023_submissions_huggingface
2023
Summary: For classification tasks based on irregular time series data, the authors introduce the Vision Time Series Transformer (ViTST) approach, where irregular time series data is displayed as line graphs and then fed to pretrained transformer-type models. The authors tested this approach on several time series datasets from the medical and human activity domains, and the results show clear advantages of the proposed approach. Strengths: Simple idea but very interesting. Could contribute to many domains dealing with time series data. Weaknesses: Time series data is everywhere but the authors tested only a few datasets from the medical and human activity domains. Thus "any shape" in line 357 sounds too strong at this time. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Have you tested multi-class classification tasks and/or regression tasks? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Since this approach could have the potential to expand widely, it would be better to state the "Limitations" of this paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for valuing our work and recognizing its strengths. We aim to address your concerns with our response below. **Response to W1: Time series data is everywhere but the authors tested only a few datasets from the medical and human activity domains. Thus "any shape" in line 357 sounds too strong at this time.** In addition to the three irregularly sampled datasets P19, P12, and PAM, we also evaluated our approach on ten representative multivariate time series datasets from the UEA Time Series Classification Archive [1], as outlined in **Section 4.4**. These datasets have diverse characteristics in terms of number of variables, length, training size, and number of classes. Table 4 shows the number of variables and the length of the different datasets, along with the performance comparison on them. More details on the datasets can be found in Table 7 in the Appendix. For clarity, we also list them here:

| Dataset | #Variables | #Classes | Length | Train Size |
| --- | --- | --- | --- | --- |
| EC | 3 | 4 | 1,751 | 261 |
| UW | 3 | 8 | 315 | 120 |
| SCP1 | 6 | 2 | 896 | 268 |
| SCP2 | 7 | 2 | 1,152 | 200 |
| JV | 12 | 9 | 29 | 270 |
| SAD | 13 | 10 | 93 | 6599 |
| HB | 61 | 2 | 405 | 204 |
| FD | 144 | 2 | 62 | 5890 |
| PS | **963** | 7 | 144 | 267 |
| EW | 6 | 5 | **17984** | 128 |

Table 4 in our paper presents the performance comparison of ViTST against six baseline methods designed for regular time series classification. From the table, we can see that our approach achieves consistently strong performance. Its average performance over the ten datasets is second-best, only slightly lower than TST. Notably, PS has the highest number of variables at 963, while EW has the longest sequence length of 17,984. Our approach still achieves competitive results on both, suggesting its effectiveness in handling time series with many variables and/or long sequences.
In summary, we assessed our technique on three datasets for irregularly sampled classification and ten for regular time series, covering a range of "shapes". Our approach consistently achieves competitive results. It's important to emphasize that most methods tailored for irregularly sampled time series often struggle with regularly sampled data, and vice versa. **Response to Q1: Have you tested multi-class classification tasks and/or regression tasks?** Yes, we tested multi-class classification tasks on multiple datasets, as shown in the table above. The EC, UW, JV, SAD, PS, EW, and PAM datasets are all used for multi-class classification. **References** [1] Bagnall, Anthony, et al. "The UEA multivariate time series classification archive, 2018." arXiv preprint arXiv:1811.00075 (2018). --- Rebuttal Comment 1.1: Comment: Thanks for the responses. As the other reviewers also pointed out, stating the limitations in the main text would be quite important since this paper will be a good starting point for the field. I'd like to keep my rating.
Summary: The authors propose a method to model irregularly sampled time series. The method is based on transforming numerical time series data into line graphs and then applying pretrained vision transformers to that data. For multivariate datasets, every variable is plotted separately, and plots are aggregated in a grid. The authors provide exhaustive evaluation on different datasets, including leaving-sensors-out settings and regularly sampled time series data. Strengths: The paper is well written and easy to understand. The method proposed by the authors is extremely simple, yet surprisingly effective on many time series modeling tasks, as shown by exhaustive experimental evaluation. Weaknesses: - I don’t see the authors’ claims made in the abstract and introduction backed by their empirical results. It is difficult to judge whether their proposed method indeed outperforms previous approaches “significantly” (L. 10, L. 43, L. 67), as such an analysis is missing. On most of the metrics for the P19 and P12 data at least, improvements do not look significant to me. On the PAM dataset results are likely significant (an analysis is also missing here). - The PAM dataset (the only one where we likely actually see a significant improvement over Raindrop) is also the one with the fewest variables (17 vs 34 & 36 for P19 and P12, respectively). I assume that the proposed method performs much worse for datasets with many variables, as also shown in the authors’ experiments on the FD and PS datasets, where ViTST indeed performs much worse than other methods. - The authors do not comment on the computational cost of their method, which is likely much higher than for most non-vision based approaches. - Recently, Semenoglou et al. proposed a very similar method, albeit limited to single-variable time series. They plot a univariate time series and feed this plot into a CNN to predict future observations of the same series.
While I think that the present work improves on the work by Semenoglou et al. in several regards (multivariate, pretrained ViT outperforming pretrained ResNet, …), the authors should include a reference to Semenoglou et al. in their related work. Additionally, I don't think statements such as "This paper studies the problem from a whole new perspective by transforming irregularly sampled time series into line graph images and leveraging powerful vision transformers for time series classification in the same way as image classification." (L. 4-7) are valid, given that the idea to transform time series to images and then applying image classification networks is not novel. - The authors do not comment on limitations of their method. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: - For the PS dataset, the authors must have used much smaller (I assume 12x12) sizes for their plots, am I correct? - What is the computational cost of the proposed method? - Did the authors conduct experiments on whether the order of the plots in the grid matters? - Based on the ablation studies provided by the authors it seems likely that most of the performance comes from pretraining of the transformer models. Wouldn’t a large pretrained transformer for numerical time series modeling be likely to perform similar or better than the method proposed by the authors? If the authors agree, then I would like to see a brief discussion of this in an updated version of the manuscript. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: Unfortunately, the authors do not comment on limitations of their work. 
I see the following possible limitations and would like to see an open discussion of those in an updated version of the manuscript: - Ability to handle extremely large number of variables - Computational costs of the method Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate your valuable feedback. Below are our responses to your concerns: **Response to W1: Concerns about the “significant” improvement** We base our claim of a "significant" improvement on extensive comparisons across three datasets and their associated metrics. On all the evaluated datasets and associated metrics, our method consistently shows superior performance. - P19 dataset: Our method outperforms the second-best approach, Raindrop, by 2.4 points in AUROC and 1.0 points in AUPRC. In addition, our approach has a lower variance than Raindrop and the third-best approach, DGM2-O. - PAM dataset: The advantage of our approach over the compared baselines is especially significant. - P12 dataset: While Raindrop is considered a strong baseline on the P19 and PAM datasets, it lags behind our method by 2.8 points in AUROC and 5.8 in AUPRC. DGM2-O and mTAND are respectively the second-best in AUROC and AUPRC for this dataset. However, neither of these baselines achieves reasonable performance on both the AUROC and AUPRC metrics. In conclusion, none of the specialized methods we evaluated consistently matched or surpassed our method across all metrics and datasets. Beyond the numbers, what we want to show is that our simple approach can match or surpass specialized methods regardless of the margin. **Response to W2: Performance on datasets with many variables** While the number of variables may play a role, other factors could also influence why our method's advantage on the P12 and P19 datasets isn't as significant as on the PAM dataset, such as the number of observations and/or the length of the time series. Regarding the PS dataset with the largest number of variables (963), our approach achieves better performance than the strong baseline TST and is second-best only behind XGBoost. On the FD dataset, it is comparable to XGBoost and ROCKET. It's worth noting that different models may be optimized for specific data types.
For instance, our method excels at handling long sequences (the EW dataset has a sequence length of 17984). When evaluating performance across all datasets, our method is second-best. It's important to emphasize that most methods tailored for irregularly sampled time series often struggle with regularly sampled data, and vice versa. **Response to W3: Computational cost** Our experiments were conducted on Nvidia A6000 GPUs. It takes around 24 mins to run 20 epochs on the PAM dataset with a batch size of 72, 32 mins to run 4 epochs on the P12 dataset, and 58 mins to run 2 epochs (with upsampling) on the P19 dataset, all on a single GPU. While our computational costs might be higher than for some non-vision methods, they appear to be within the range that could be considered acceptable in the context of current ML practices, considering the widespread use of large language and vision models nowadays. **Response to W4/5: Related work and limitations** We will add the work of Semenoglou et al. to the related work and add a limitations section in the updated version. **Response to Q1: For the PS dataset, the authors must have used much smaller (I assume 12x12) sizes for their plots, am I correct?** One can choose any size for the plot. For example, we tried setting the size of each plot (grid cell) to 24x24, where the image size increased to 768x768. With this adjustment, the performance of ViTST improves from 91.3 to 92.4, compared with using a 12x12 plot size. **Response to Q2: Computational cost** Please refer to the response to W3. **Response to Q3: Impact of grid order on results** We have conducted experiments to examine the impact of the grid order. By default, the grid order was sorted based on the number of observations. We also conducted tests with grids in random orders.
The results from these tests are outlined below:

| **P19** | **AUROC** | **AUPRC** |
| --- | --- | --- |
| Sorted order | 89.4$\pm$1.9 | 52.8$\pm$3.8 |
| Random order 1 | 88.3$\pm$1.8 | 49.9$\pm$3.2 |
| Random order 2 | 88.2$\pm$1.6 | 51.0$\pm$3.9 |

| **P12** | **AUROC** | **AUPRC** |
| --- | --- | --- |
| Sorted order | 85.6$\pm$1.1 | 49.8$\pm$2.5 |
| Random order 1 | 84.3$\pm$2.2 | 48.0$\pm$4.5 |
| Random order 2 | 84.2$\pm$1.7 | 47.6$\pm$3.7 |

| **PAM** | **Accuracy** | **Precision** | **Recall** | **F1 Score** |
| --- | --- | --- | --- | --- |
| Default order | 96.1$\pm$0.7 | 96.8$\pm$1.1 | 96.5$\pm$0.7 | 96.6$\pm$0.9 |
| Random order 1 | 95.5$\pm$1.0 | 96.8$\pm$0.8 | 95.7$\pm$1.0 | 96.2$\pm$0.9 |
| Random order 2 | 95.2$\pm$0.7 | 96.6$\pm$0.6 | 95.6$\pm$0.8 | 96.0$\pm$0.7 |

The order does affect performance, most notably on the P19 and P12 datasets, which have varying observation counts across variables. On the other hand, the PAM dataset, with a uniform observation count for every variable, displays less variability in performance. A possible explanation is that clustering the plots with more observations, which are also more informative, allows the model to better capture their correlations and allocate attention effectively. **Response to Q4: Effectiveness of a large pretrained transformer for numerical time series modeling** A large pre-trained transformer for numerical time series modeling might offer comparable or superior performance to ViTST. However, such a transformer would necessitate extensive time series data for pretraining, requiring significant data collection efforts. It's important to emphasize that our intention is not to advocate our vision transformer-based approach as the optimal option for performance. Instead, we aim to demonstrate that publicly available pre-trained vision transformers can easily adapt to time series modeling and achieve satisfying results.
This method largely simplifies model design and data collection efforts and could also bridge the time series and computer vision domains, opening up the possibility of adapting established vision techniques to the time series domain and facilitating multi-modal research. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their detailed response and the additional experiments. I find the results on the impact of grid order quite interesting and think the authors' explanation for this is plausible. Can the authors also comment on the computational cost of their method at inference compared to other non-vision based methods? I fully agree with the authors' response to my Q3. Please include this important point in the discussion/limitation section! --- Reply to Comment 1.1.1: Title: Response to the reviewer's reply Comment: Thank you for your thoughtful feedback and for taking the time to review the additional experiments. We appreciate your agreement with our explanations. We will ensure that a discussion of the limitations of the current approach is incorporated in the updated version of the paper. As for the inference cost comparison, we list the inference times (in seconds) of different methods on the test sets of the three datasets below. All inferences were made on a single Nvidia A6000 GPU.

| Dataset | Transformer | mTAND | SeFT | Raindrop | MTGNN | DGM^2-O | GRU-D | ViTST |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| P19 | 0.21 | 0.52 | 2.72 | 3.05 | 3.62 | 2.47 | 31.04 | 44.51 |
| P12 | 0.12 | 0.44 | 0.97 | 1.27 | 1.46 | 2.80 | 10.13 | 12.14 |
| PAM | 0.06 | 0.23 | 0.89 | 0.67 | 1.16 | 2.98 | 4.55 | 5.30 |

Our vision-based method consumes more inference time than the non-vision baselines. However, we believe this cost remains within an acceptable range in the context of today's ML practice and medical applications. We will include the computational cost in the updated version of the paper.
Once again, thank you for your valuable insights.
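The default grid ordering discussed in this thread (variables sorted by observation count, most-observed first) can be sketched as follows; the helper name and the `name -> list of (time, value)` data layout are illustrative assumptions, not the authors' code.

```python
def sort_by_observations(series):
    """Return variable names ordered by observation count, descending.

    `series` maps a variable name to its list of (time, value) samples,
    so irregularly sampled variables naturally have different lengths.
    """
    return sorted(series, key=lambda name: len(series[name]), reverse=True)

# toy example: the most frequently observed vital comes first in the grid
series = {
    "HR":   [(0, 72), (1, 75), (3, 74)],
    "Temp": [(0, 36.6)],
    "SpO2": [(0, 98), (2, 97)],
}
print(sort_by_observations(series))  # ['HR', 'SpO2', 'Temp']
```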
Optimistic Meta-Gradients
Accept (poster)
Summary: This paper studies a connection between optimization and meta-learning. For the case of a single task, it shows an equivalence between GD with momentum and GBML, and another equivalence between GD with Nesterov acceleration and the recent Bootstrapped Meta-Gradient algorithm. Theoretical analyses are done for these algorithms, showing that GBML speeds up convergence by a constant factor but is not able to improve on the $O(1/T)$ rate, while BMG is able to improve the rate to $O(1/T^2)$. Experiments are done on quadratic minimization and ImageNet image classification that confirm the theory. Strengths: 1. The mathematical argument in the paper is very clearly presented, e.g. the authors roughly outline proof techniques. 2. Overall the paper appears to be sound, although I did not check the math. Experiments were done on both "toy" examples and a more realistic ImageNet problem. 3. As far as I know, the connection between BMG and GD with Nesterov acceleration is novel. Weaknesses: In my opinion, the major weakness of the paper is the fact that it only considers the single-task setting. However, one of the major motivations behind meta-learning is that information may be shared between tasks in order to improve convergence speeds for all tasks, and the multi-task setting is arguably more popular in the literature. It would greatly strengthen the work to consider the theory for a multi-task setting, although I understand that it might be more appropriate for future work. Minor: 1. The assumption that the objective function is convex is fairly strong. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Is it fair to search over the range of hyperparameters in the experiments in Section 7.1? Usually when this is done the final results are provided for a held-out test set (a new quadratic function in this case), but this does not seem to be the case here. Confidence: 3: You are fairly confident in your assessment.
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors do not have a broader impact section, and I agree since this work discusses optimization methods. However, they do not discuss the limitations/weaknesses of their work, which I think should be added to the final draft. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
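The rate gap the review describes (GD's $O(1/T)$ versus an accelerated $O(1/T^2)$) can be illustrated on a toy convex quadratic. This sketch uses the textbook gradient-descent and Nesterov updates with standard step sizes, not the paper's meta-learned update; it only demonstrates the acceleration phenomenon being analyzed.

```python
import numpy as np

# f(x) = 0.5 * x^T A x on a badly conditioned quadratic
A = np.diag([1.0, 100.0])
L, mu = 100.0, 1.0  # largest and smallest eigenvalues of A
loss = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x

def gd(x, steps):
    """Plain gradient descent with step size 1/L."""
    for _ in range(steps):
        x = x - grad(x) / L
    return x

def nesterov(x, steps):
    """Nesterov's accelerated gradient with the standard momentum factor."""
    beta = (np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))
    y, x_prev = x.copy(), x.copy()
    for _ in range(steps):
        x_new = y - grad(y) / L          # gradient step at the lookahead point
        y = x_new + beta * (x_new - x_prev)  # extrapolate (the "optimistic" step)
        x_prev = x_new
    return x_prev

x0 = np.ones(2)
print(loss(gd(x0, 200)), loss(nesterov(x0, 200)))
```

After 200 steps the accelerated iterate reaches a far smaller loss than plain GD, mirroring the constant-factor versus rate-level distinction the paper draws between GBML (momentum-like) and BMG (Nesterov-like).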
Rebuttal 1: Rebuttal: Thank you for your review, we are glad you found the paper well presented with novel insights. We appreciate that multi-task meta-learning is a common problem setting and one that we do not consider in this paper. While multi-task meta-learning has been fairly extensively analyzed, we are not aware of any theoretical work in the single-task setting (except some papers in the online optimization community on meta-learned learning rates). The goal of this paper is to address this gap, hence the focus on single-task online meta-learning. Notably, as the reviewer points out, rates of acceleration in the multi-task setting rely on transfer across tasks. Reading these papers, one may be tempted to conclude that meta-learning can only achieve acceleration by transferring knowledge across tasks. We believe this to be a commonly held belief in the community. Significantly, our paper proves the *opposite* to be true: meta-learning can achieve acceleration *without* the need for multiple tasks! We believe this to be a fundamental insight that contravenes current mainstream thinking within the meta-learning community. Fundamentally, we show that meta-learning is the same as standard optimization, just non-linearly transformed. For us, this was an eye-opening discovery that has changed how we think about meta-learning. We hope that this paper can help others in the field to similarly deepen their understanding of meta-learning. On the empirical results in Section 7.1: this is a fair question. Proofs of accelerated convergence always depend on an optimal choice of learning rates. Hence, as our purpose is to verify theoretical predictions, the relevant empirical study is to compare algorithms under optimal learning rates. This can be seen as a comparison of ‘best case performance’; as long as all algorithms are tuned similarly, this is still a fair comparison. On limitations: the main limitation of our work is the assumption of convexity.
We discuss this briefly in Section 2, but will expand this discussion in an updated version of the manuscript. If the reviewer would like to see other limitations discussed, please do let us know. --- Rebuttal Comment 1.1: Title: Thanks to the authors for the rebuttal Comment: After reading the other reviews and rebuttals, I feel that the authors have adequately addressed my concerns and I have decided to raise my score from 5 to 6.
Summary: This paper sheds light on an interesting perspective on meta-learning by studying the connection between gradient-based meta-learning and convex optimisation in the single-task setting. It shows that meta-gradients contain gradient descent with momentum and Nesterov acceleration as special cases. Furthermore, gradient-based meta-learning can be understood as a non-linear transformation of an underlying optimisation method. The authors establish the rates of convergence for meta-gradients in convex settings. For meta-learning to achieve accelerated convergence, some form of optimism is needed. Specifically, the paper provides the first rigorous proof that BMG attains an accelerated rate of convergence, and highlights the underlying mechanics that enable faster learning with BMG. Strengths: This paper provides a novel view on meta-learning by studying the connection between gradient-based meta-learning and convex optimisation. It reveals that gradient descent with momentum and Nesterov acceleration are special cases of gradient-based meta-learning with different update rules. This paper is of high technical quality. Based on the new understanding of meta-learning as a non-linear transformation of an underlying optimisation method, it establishes a theoretical analysis and proof of the convergence rates of meta-gradients in convex settings. The theoretical analysis has insightful implications: we are able to further accelerate the rate of convergence with some form of optimism. As a result, the recently proposed BMG method is proven to achieve an accelerated convergence rate. These results are important and significant, providing us with a new, deep understanding of gradient-based meta-learning, and theoretical guidance for designing fast meta-learning algorithms with update rules. Last, the paper is well organized and clearly written.
Weaknesses: This is a theory paper; it could be more impactful if the authors added more experimental results on large-scale problems, either in convex optimization or in deep learning settings beyond ImageNet. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Curious to know what the computational cost of Optimistic Meta-Learning is compared to SGD in the ImageNet experiment. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: 1. There is no code provided to reproduce the results. 2. The authors do not describe limitations of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review; we are delighted that you liked the paper and found our results important and significant. In this paper, we focused on the theoretical aspect, but we agree that further empirical investigation is an exciting area for future research. We hope that the theoretical insights we have developed can help practitioners develop novel methods that perform better in the large-scale regime. On computational cost, an SGD update costs 2N FLOPs, N being the number of parameters in the network. The Optimistic Meta-Learning optimizer in the ImageNet experiment (Section 7.2) requires 7N FLOPs. As a reference, an optimizer like Adam requires 18N FLOPs per update. --- Rebuttal Comment 1.1: Title: Thanks to the authors for the rebuttal Comment: I’ve read comments from all the other reviewers. Thank you for your rebuttal, and I appreciate that my concerns have been addressed.
Summary: This work discovers the connection between gradient-based meta-learning and convex optimization of the meta parameters. From there, the authors observe that common gradient descent and its variants with momentum are special cases. To match the conventional $O(1 / T^2)$ convergence rate, the authors propose the optimistic version. Strengths: 1. The observation that common accelerated methods can be viewed as meta-learning is interesting. The online convex optimization view presents a new perspective for showing the convergence of the accelerated methods, which could be itself interesting. 2. The proposed optimistic gradient-based meta-learning method is simple, but shows performance gain over vanilla baseline methods. Weaknesses: The major weakness of the analysis comes from the restrictive form of the meta-learner. This makes the analysis not applicable to all neural-network-based learn-to-optimize methods, while the meta-learning methods applied to the learning rate or pre-conditioning matrix are already well studied in the literature. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Figure 1 shows the top-1 accuracy of SGD and standard meta-learning with ResNet-50 on ImageNet. However, to my knowledge SGD can achieve >75% top-1 accuracy on ImageNet; can the authors elaborate more on what codebase they use and the details of the experimental setting for that experiment (e.g., how the learning rate was tuned, what the schedule was, etc.)? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review, we are glad you found the connections we made interesting. We fully agree that the restrictions on the meta-learner are limiting. Unfortunately, this is an inherent limitation we face when using convex analysis, since neural networks are not convex. Hence, a non-convex interaction between the meta-learner and the learner cannot be analyzed exactly in this way. With that said, we would like to point out that our theoretical predictions do apply to any meta-learner, up to first-order Taylor series error. For small learning rates, our predictions are accurate, and our ResNet experiment demonstrates that our theory has practical use. Thank you for raising your question on Figure 1. We use the Haiku example launch script, which is available on GitHub. We are not allowed to provide external links in the rebuttal, but please see Appendix C in the supplementary materials for a URL. We only modified this script by changing the optimizer. The baseline we compare against is SGD with a fixed learning rate (a non-adaptive baseline). We tuned the learning rate via a sweep over the range $[1e-4, 3e-4, 1e-3, 3e-3, 1e-2, 3e-2, 1e-1, 3e-1]$ on the validation set. We report scores for the best performing one. We would like to stress that the point of this experiment is not to propose a new meta-learner that beats state of the art, but to empirically validate that our theoretical predictions (i.e. that an adaptive meta-learner can yield accelerated learning relative to a non-adaptive learner) hold in the deep learning setting. --- Rebuttal Comment 1.1: Title: Response to Authors' Rebuttal Comment: Thanks for your rebuttal and my concerns are addressed. I hope that the implementations will be made publicly available for reproducibility.
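The rebuttal's point, that an adaptive meta-learner can accelerate learning relative to fixed-learning-rate SGD, can be illustrated on a toy convex problem. The sketch below uses a simple hypergradient-style online learning-rate adaptation as a stand-in for the paper's meta-update; the quadratic objective, step sizes, and clipping bounds are all illustrative choices, not the authors' setup.

```python
import numpy as np

def quad_loss(x, A):
    """Convex quadratic objective f(x) = 0.5 * x^T A x."""
    return 0.5 * x @ A @ x

def run(adapt_lr, steps=200, eta=0.05, beta=1e-3):
    A = np.diag([1.0, 10.0])          # ill-conditioned quadratic
    x = np.array([1.0, 1.0])
    g_prev = np.zeros_like(x)
    for _ in range(steps):
        g = A @ x
        if adapt_lr:
            # Hypergradient-style meta-step: grow the step size while
            # successive gradients agree, shrink it when they disagree.
            eta = min(max(eta + beta * (g @ g_prev), 1e-4), 0.15)
        x = x - eta * g
        g_prev = g
    return quad_loss(x, A)

loss_fixed = run(adapt_lr=False)   # plain SGD with a fixed step size
loss_adapt = run(adapt_lr=True)    # SGD with an online-adapted step size
```

On this toy problem the adapted run reaches a markedly lower loss than fixed-step SGD after the same number of updates, mirroring the qualitative prediction the authors validate at larger scale on ImageNet.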
Summary: The submission studies connections between recent advances in convex optimisation and heuristic meta-learning update rules. The provided framework contains standard methods such as heavy ball and Nesterov's momentum as special cases, while also containing rules that correspond to online meta-learning. Bootstrapped meta-gradients are also formalised within this framework. Some of the theoretical results are corroborated by empirical investigations. Strengths: * The submission is well-written and interesting. I can see myself referring back to this paper in future when developing new meta-learning or bi-level optimisation methods. * Framework provides a useful tool for conceptualising and analysing meta-learning algorithms, as well as comparing them with existing (or novel) convex optimisation approaches. * The analysis shows how one can obtain optimal $O(1/T^2)$ rates of convergence by using optimism. E.g., by predicting that the current gradient will be similar to the previous gradient. * Experiments are provided that empirically corroborate the predictions of the theoretical analysis. In particular, the meta-learning variants of momentum and adagrad converge substantially faster than the conventional approaches. Weaknesses: * The analysis is limited to the convex case, which is understandable. However, it would be interesting to at least empirically investigate the non-convex case as well. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: N/A Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: There is no substantial discussion of the limitations of the work. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review, we are glad you found the paper interesting and potentially helpful in your future work! We agree that studying the non-convex case empirically is an exciting area for future research. We take some initial steps in this direction with our ResNet experiment, in which the learner’s problem is non-convex (while the meta-learner’s problem is still convex). We hope that our theoretical insights can help practitioners develop new meta-learning algorithms that perform better at scale in non-convex settings.
NeurIPS_2023_submissions_huggingface
2023
Collapsed Inference for Bayesian Deep Learning
Accept (poster)
Summary: The paper presents a new method for calculating Bayesian integrals such as the Bayesian model average (BMA) based on volume computation schemes. Specifically, the authors draw inspiration from a weighted volume computation (WVC) problem. Since the WVC is intractable for common neural networks they approximate it with weighted model integration (WMI), which can compute volumes of literals connected with logical connectives. Some approximations are used to accommodate WMI solvers, such as approximating a Gaussian likelihood with a triangular distribution, and using the WMI solver on part of the network (usually a few dozen parameters). Results show superior performance compared to SGD-based Bayesian methods and comparable results to Bayesian last-layer methods. Strengths: Overall I liked the idea of the paper and the novel view it provides for computing Bayesian integrals. Strengths: * Novel and interesting characterization of the BMA as a volume computation problem * Novel way of approximating the BMA and other integrals using WMI * The authors suggested practical and efficient ways to accommodate WMI to the Bayesian learning framework * The use of a motivating example throughout is a good idea * For the most part the paper is written clearly Weaknesses: There are several issues I would like the authors to address: **Method.** * The assumption of a uniform posterior is a bit odd and I didn't understand the intuition as presented in L199 in the main text and L510 in the appendix. It is very reasonable to assume that the posterior is not uniform, even when evaluated only on part of the network. A proper justification or intuition for this choice is missing. Also, the good empirical results can be a byproduct of being Bayesian on only part of the network as evident in [1]. **Experiments.** * In my opinion, the authors overstate their method's performance (e.g., "significant improvements", "new state of the art").
First, the method is not compared against all relevant Bayesian methods so it cannot be considered as SoTA. Second, the improvements are generally mild, and when factoring Bayesian last-layer methods (which are for some reason presented in the Appendix), the method is only comparable to them. Furthermore, with more modern NNs CIBER performance is usually inferior to several baseline methods. * It is not clear why the authors didn't present the performance of Bayesian last layer methods on the CIFAR datasets. In my opinion, it's as important as presenting SGD-based Bayesian methods. * A quantification of the method complexity (e.g., wall-clock time) is missing. To me, it is not clear how expensive WMI solvers are and how it grows with the number of parameters. I would have expected to see a comparison to baseline methods w.r.t that aspect as well. * Experimental details are severely lacking (e.g., was there a validation split? What were the hyper-parameters? Did you do a grid search over hyper-parameters? If so, on what values? ). * Information about how to choose some of the constants such as $\alpha, l_i, u_i$ is missing. Have you searched for possible values? In general, how did you set their values? * Code wasn't added to the submission. When factoring this bullet and the last 2 bullets, in my opinion, the results in this paper are not reproducible and this harms this paper's score. **Clarity of presentation.** * I think a short background on the main concepts of SMT will make the paper self-contained and easier to read. * In Fig. 2 it would help to plot the training points as well. [1] Sharma, M., Farquhar, S., Nalisnick, E., & Rainforth, T. (2023, April). Do Bayesian Neural Networks Need To Be Fully Stochastic?. In International Conference on Artificial Intelligence and Statistics (pp. 7694-7722). PMLR. 
Technical Quality: 2 fair Clarity: 3 good Questions for Authors: * Perhaps I do not understand something: since some SMT formulas are built from logical conjunctions with conflicting conditions, such as the ReLU one, how can they be satisfied? There is no $W$ that satisfies both conditions for the same $x$, no? * Why did you approximate the Gaussian likelihood with a triangular distribution instead of a truncated Taylor series for the exponential function? * Have you tested your method's sensitivity to temperature scaling? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: The authors did not address the limitations of their method. Please see the weaknesses/questions sections. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for appreciating our work for a novel view of computing Bayesian integrals, a practical and efficient framework, and a clear presentation. In what follows, we will address your concerns, with the references put in the general response due to the character limit. [uniform approximation to posterior] Please see our answer in the global response. [fully/partially Bayesian] The idea of partially stochastic neural networks in [4] is to partition the weights into two sets: a deterministic set where the weights are assigned to be the MAP estimate and a stochastic set where the weights are stochastic and approximated using existing Bayesian deep learning methods such as the Laplace approximation, variational inference or SWAG. This is different from CIBER, where the neural network is fully stochastic instead of being partly stochastic. CIBER also partitions the weights into two sets: a sampling set where weights are approximated by SWAG and a collapsed set where the weights are approximated using our proposed closed-form inference. A future research direction can be to combine both by applying the closed-form inference to the stochastic set in partially stochastic neural networks to study if the observations on stochasticity in [4] still hold. [baselines] In the empirical evaluations, we compare with baselines ranging from the most closely related ones, which use samples from SGD trajectories for posterior approximations, to variational-inference-based ones, deep ensembles, MC dropout, etc., with strictly more baselines than [8]. The Bayesian final layer method [7] is included both in the main paper in Table 2 for log-likelihood and in the appendix in Table 6 for RMSE as direct comparisons to the numbers reported in [8], where the Bayesian final layer method is only reported for regression tasks and not the classification tasks.
[runtime complexity] We provide the runtime of CIBER on the CIFAR-100 dataset in the classification experiment, with the three neural network models being VGG-16, PreResNet-164, and WideResNet28x10 respectively, to showcase the empirical computational complexity of CIBER. We compare our runtime with SWAG, which is known to be a simple and scalable Bayesian deep learning approach, summarized in the table below. The runtime of CIBER consists of two main processes: 1) training time, which is exactly the same as the runtime of SWAG, and 2) WMI solving time. The table shows that CIBER is almost as efficient as SWAG and that WMI solving brings little computational overhead due to the efficiency of the WMI solvers.

| MODEL | VGG-16 | PreResNet-164 | WideResNet28x10 |
|-----------------|:----------:|:-------------:|:---------------:|
| Training / SWAG | 2h3m59s | 6h3m18s | 29h39m28s |
| WMI Solving | 11m43.134s | 11m20.638s | 19m1.076s |

[experimental details/code] We put the experimental details of both the regression and the classification experiments in Section C.2 and Section C.3 respectively. The way that the hyper-parameters, including learning rates and weight decays, are chosen exactly follows [3], as mentioned in the Appendix. The hyper-parameter $\alpha$ as described from Line 142 to Line 145 is 2.45. The parameters $\ell, u$ are the bounds of the uniform posterior, defined by the minimum and the maximum of the weight samples drawn from the SGD trajectories respectively. We’ll include these details in the next version to make it more self-contained. We’ve also provided our code using an anonymous link and sent the link to the AC for your reference. [presentation] We’ll include the suggested changes in the presentation in the next version. [Q1] An assignment $\mathbf{x}$ satisfies an SMT formula $\Delta$ defined over variables $\mathbf{X}$ if the formula $\Delta$ evaluates to true after substituting the variables by their assignments.
For example, given an SMT formula encoding a ReLU, $((W \cdot 1) > 0 \Rightarrow Z = W \cdot 1) \land ((W \cdot 1) \leq 0 \Rightarrow Z = 0)$, the assignment $(W = 3, Z = 3)$ satisfies the formula since both $((W \cdot 1) > 0 \Rightarrow Z = W \cdot 1)$ and $((W \cdot 1) \leq 0 \Rightarrow Z = 0)$ are evaluated to be true in this conjunction. Another satisfying assignment would be $(W = -1, Z = 0)$. Meanwhile, the assignment $(W = 3, Z = -1)$ does not satisfy $\Delta$ since $((W \cdot 1) > 0 \Rightarrow Z = W \cdot 1)$ is evaluated to be false, even though $((W \cdot 1) \leq 0 \Rightarrow Z = 0)$ is evaluated to be true, due to the conjunction of both. We’ll include the definitions and intuitive explanations of the satisfaction of a logical formula in the camera-ready version for readability. [Q2] One message we would like to deliver in this work is that a closed-form inference, even formed by low-order polynomials, is able to deliver accurate approximation, as shown in Figure 1. To form an approximation to the Gaussian distribution, the Taylor series is not applicable since, mathematically, it approximates a function locally near a certain point instead of approximating the whole distribution. The approximation of a Gaussian using a triangular distribution in our work is common in numerical analysis and has been adopted in various applications [3,4]. [Q3] We agree that it will be interesting future work to study the combination of CIBER and temperature scaling. --- Rebuttal Comment 1.1: Title: Response to Reviewers Comment: I thank the authors for the response. Some things are more clear now, such as the ReLU satisfiability and the runtime complexity. I wasn't convinced that this method is indeed preferred over baseline methods, at least in terms of the empirical results presented in the paper. This is fine though, as I mainly judge this method by its novelty and not the empirical evaluation.
One thing I do not agree with the authors about is the justification brought here in the rebuttal for using a uniform approximation to the posterior. While I do understand the computational constraints forcing the authors to make that choice, I do not think that it holds in terms of a formal justification. Under discrete sampling, the uniform weight makes sense as we assume that the samples reflect high-density regions according to the posterior (or its approximation). This assumption does not hold in the case of a continuous uniform distribution. For instance, if the posterior is Gaussian and if we sample it regularly enough, then the predictive distribution will be approximately Gaussian as well. However, this will not be the case with a continuous uniform posterior distribution. Overall, considering the authors' response and the reviewers' comments, I decided to keep my original score. --- Reply to Comment 1.1.1: Title: Reply Comment: We would like to thank you for appreciating the novelty of our work. However, the discussion of convergence in the limit entirely misses the point of the paper. One cannot achieve both low variance and low bias in Bayesian deep learning. The key point of our paper is that the biased uniform distribution is often more accurate than other methods in the finite-sample case. Of course our proposal is biased, and that means that in the limit of infinite samples, the uniform approximation is not formally justified. But in the finite-sample case it is, as we show. That is the entire point of the paper!
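The satisfaction semantics discussed in this thread can be made concrete with a tiny checker for the ReLU encoding $(W \cdot 1 > 0 \Rightarrow Z = W \cdot 1) \land (W \cdot 1 \leq 0 \Rightarrow Z = 0)$. This is an illustrative sketch in plain Python, not part of any actual SMT/WMI solver.

```python
def implies(p, q):
    """Material implication: p => q."""
    return (not p) or q

def satisfies_relu(W, Z):
    """Check whether an assignment satisfies the SMT encoding of
    Z = ReLU(W * 1): (W*1 > 0 => Z = W*1) and (W*1 <= 0 => Z = 0)."""
    return implies(W * 1 > 0, Z == W * 1) and implies(W * 1 <= 0, Z == 0)

print(satisfies_relu(3, 3))    # True: positive branch holds
print(satisfies_relu(-1, 0))   # True: negative branch holds
print(satisfies_relu(3, -1))   # False: violates the positive branch
```

For any given $W$, at most one antecedent is true and the other implication is vacuously satisfied; this is why the conjunction of seemingly conflicting branches is satisfiable, which addresses the reviewer's question about conflicting conditions.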
Summary: The paper provides a closed-form approximation for the posterior predictive distribution in Bayesian deep learning, in both regression and classification. The paper is overall clear and quite well written. The theory seems reasonable; however, a theoretical analysis of the approximation error, depending either on the number of model parameters or on the number of predictive samples, is not provided. Currently, the sensitivity of the results to the number of samples is not particularly clear to me. As Monte Carlo will eventually win for a large number of samples, I believe it is important to understand up to what point the proposed method is better. Strengths: The paper is quite well written. It attempts to provide a closed-form approximation for the posterior predictive distribution in Bayesian deep learning, which is a meaningful effort considering that Monte Carlo samples can be expensive. The approximations proposed by the paper seem reasonable. The paper proposes a number of experiments and benchmarks against several methods. Weaknesses: - The paper lacks a sensitivity analysis to the number of samples. - It would be great if the error for some of the approximations could also be expressed analytically. For example, can we say anything about the error of a triangular approximation? - If we strip the paper of its motivating terminologies (WMI and collapsed samples), it seems similar in spirit to partially stochastic Bayesian inference, with a uniform posterior approximation and a triangular approximation of the likelihood to work out a closed-form approximation for the predictive. There does not seem to be a direct connection/comparison with this part of the literature. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Some more granular comments below. - I find Figure 2 not very convincing.
The authors claim that the predictive uncertainty using 10 samples from CIBER is closer than the one using 10 samples from HMC to the benchmark approximation given by 2k samples from HMC. First of all, the blue prediction from CIBER seems further off than HMC with 10 samples. Consequently, the uncertainty interval is wider. While it is positive that the larger uncertainty is adjusted to the wider prediction error, to me HMC-10 seems closer to HMC-2000 than CIBER-10. It would also be interesting to see what happens with a few more samples, say 20 or 30, which is still low but often possible. Also, I assume the 10 samples were taken in the asymptotic regime, that is, after HMC converged? - Also in Figure 3, the authors use 5 samples only. As this is very low, it would be interesting to see at what point MC becomes better. E.g. is it 7 samples? 10? 100? 1000? This would give the reader a clearer perspective of when one algorithm should be used instead of the other. - Line 128: It would be good to expand on the meaning of this definition. The example below clarifies a bit but, the terminology being non-standard, it is still not immediate to understand what "satisfaction of an SMT formula" means, and the idea underlying the definition of WMI. - I cannot find in the appendix the piecewise polynomial approximations for classification mentioned at line 213.
 - In Proposition 7, the authors derive expressions for the predictive density and predictive mean. What about other statistics, like variance and entropy? Can these be derived in closed form as well? If so, it would be good to provide an expression. - To me, the definition of collapsed BMA reminds me a lot of partially stochastic neural networks, that is, when Bayesian inference is performed on part of the network only. In fact, given a posterior distribution p(w|D), where w=[w_c, w_s], you can always decompose p(w|D)=p(w_c|w_s, D)p(w_s|D). If you approximate p(w_s|D) with an empirical distribution at a bunch of samples, there you have collapsed BMA. In partially stochastic neural networks, usually p(w_s|D) is just a single Dirac delta rather than an ensemble. In this paper, instead, w_c are taken along the trajectory of SGD, similarly as in SWAG. Perhaps it would be worth explaining the connection. - In the conditional posterior approximation, how are the lower and upper limits of the uniform distribution defined? - The paper mentions [Kristiadi et al. 2022] a few times. In that paper, the authors also provide a closed-form approximation for the predictive distribution in classification, i.e. the multi-class probit approximation. Since their rationale is similar to that of this paper (closed-form approximations in a low-sample regime), it would be good to see a comparison. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: There does not seem to be much discussion about limitations. The authors could better discuss in which scenarios their approximation is better or worse than other methods.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for appreciating our work for its clear presentation, solving a meaningful problem, reasonable approximation, and thorough empirical evaluations. In what follows, we will address your concerns, with the references put in the general response due to the character limit. [Q1] Figure 2 demonstrates that for overconfident ReLU NNs, using HMC with 10 samples does not help the overconfidence issue much: its confidence intervals are irregular and often too narrow, and many predictions have the shaded region much narrower than those in Figure 2c. While in Figure 2b, the shaded region is wide and for the predictions that are more off from the ground truth, the shaded region is even wider meaning that those predictions come with less confidence, showing that using a uniform approximation to posterior helps estimate the uncertainty in a consistent way. Still, we provide a quantitative analysis of the number of samples. Please see our response to [Q2] below. [Q2] We perform a comparison between CIBER and the MC method for the Bayesian linear regression setting as suggested by the reviewer to see how many samples the MC method needs to match the performance of CIBER with the result presented in Figure A.2 in the rebuttal pdf. The performances of both approaches are evaluated using KL divergence between the estimated posterior distribution and the ground-truth one, averaged over 10 trials. In Figure A.2, the dashed green line shows the performance of CIBER with 50 samples and the blue curve shows the performance of MC with an increasing number of samples. As expected, the MC method yields lower KL divergence as the number of samples increases; however, it takes more than 100 samples to match CIBER, indicating its low sample efficiency and that developing efficient and effective inference algorithms such as CIBER for estimating BMA is a meaningful question. 
[Q3] Please see [satisfaction to SMT formulas] in the general response. [Q4] For classification, a piecewise polynomial approximation with the polynomial degree being three is applied to the sigmoid function, which we visually present in Figure A.3 in the rebuttal pdf and which again is obtained by minimizing the L2 distance. We’ll include the explicit expression of the piecewise polynomial approximation in the next version. [Q5] For variance, we derive the closed-form expression in Proposition B.1 in the rebuttal pdf; this is a direct generalization of our results because variance can be derived from the second moment of the distribution, which is an expectation over $y^2$, a polynomial. For entropy, the existing WMI solver does not allow the log operation and thus there is no direct result. We do think it is an interesting future research question which statistics can be exactly/approximately solved by WMI solvers and how such results can motivate the derivation of new algorithms for accurate and reliable inference. [Q6] The idea of partially stochastic neural networks [4] is to partition the weights into two sets: a deterministic set where the weights are assigned to be the MAP estimate and a stochastic set where the weights are stochastic and approximated using existing Bayesian deep learning methods such as the Laplace approximation, variational inference or SWAG. This is different from CIBER, where the neural network is fully stochastic. CIBER also partitions the weights into two sets: a sampling set where weights are approximated by SWAG and a collapsed set where the weights are approximated using our proposed closed-form inference. A future research direction can be to combine both by applying the closed-form inference to the stochastic set in partially stochastic neural networks to study if the observations on stochasticity in [4] still hold.
We’ll include this discussion and the related work on partially stochastic neural networks, including [4], in the camera-ready version. [Q7] The lower and upper limits for each weight are defined by the minimum and the maximum of the weight samples drawn from the SGD trajectories respectively. [Q8] The closed-form approximation in [2] is motivated by a binary probit approximation when the logit conforms to a Gaussian distribution, that is, the posterior predictive is a Gaussian, which is exactly the same setting we have for the synthetic classification in order to provide a quantitative analysis of the approximation when the BMA ground truth is available. In this setting, the probit approximation is almost exact. Still, such an approximation is specific to classification, while our proposed approximation using a reduction to weighted volume computation is applicable to layers with various activations, as shown in the experiments. Also, it assumes a mean-field approximation that ignores correlations induced by activations such as ReLU, while the ReLU can be naturally encoded in the weighted model integration framework. [piecewise-polynomial approximation] The piecewise polynomial approximation enjoys theoretical guarantees on the approximation error by the Stone-Weierstrass theorem [4], stating that any continuous function can be uniformly approximated by polynomials up to arbitrary precision, which holds in high-dimensional spaces. That is, functions $f$ including the multivariate Gaussian can all be effectively approximated by piecewise polynomials, with the theoretical guarantee that for arbitrary $\epsilon$, there exists a piecewise polynomial $p$ such that $\parallel f - p \parallel < \epsilon$ holds, where $\parallel \cdot \parallel$ denotes the supremum norm. In our work, we choose to approximate the Gaussian using a triangular distribution, which is common in numerical analysis and has been adopted in various applications [5,6].
--- Rebuttal Comment 1.1: Comment: I thank the reviewers for the additional experiments and the explanations. As a result, I have increased my score to 6.
Summary: The paper proposes to use techniques from weighted volume computation to deterministically marginalize over (a subset of) the weights in a BNN rather than sampling them when computing the predictive posterior. The experiments find the method to perform competitively with some standard baselines from the literature on UCI and CIFAR benchmarks. I think this is a nice methodological contribution, although I wish the experiments reported the computational cost and explored the impact of only being able to select a subset of the weights for volume computation (i.e. more detailed analysis of the limitations). So overall I would lean towards acceptance. Strengths: * Using volume computation for BNNs is new as far as I am aware, and I find work that connects different sub-fields in the literature generally quite valuable. * The paper is very well written, I found the low-dimensional running example used throughout the paper very effective. * The method seems to perform well in the experiments. Weaknesses: * The paper mentions that applying volume computation over all weights would not be feasible, but does not report how expensive it is on the subset that it selects. * Similarly, I would have liked to see some analysis of the compute time-performance tradeoff as one varies the number of weights that are used in volume computation rather than sampled. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: I’d mainly like to see some discussion on the points mentioned in the weaknesses. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The computational cost is mentioned but not further substantiated by runtime figures and comparisons. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We deeply thank the reviewer for appreciating our work for its novelty, clear presentation, and thorough empirical evaluations. We truly appreciate that you find the connection between sub-fields proposed in our work to be quite valuable. In what follows, we will address your concerns. [runtime complexity] We provide the runtime of CIBER on the CIFAR-100 dataset in the classification experiment, with the three neural network models VGG-16, PreResNet-164, and WideResNet28x10 respectively, to showcase the empirical computational complexity of CIBER. We compare our runtime with SWAG, which is known to be a simple and scalable Bayesian deep learning approach; the comparison is summarized in the table below. The runtime of CIBER consists of two main processes: 1) training time, which is exactly the same as the runtime of SWAG, and 2) WMI solving time. The table shows that CIBER is almost as efficient as SWAG and that WMI solving brings little computational overhead due to the efficiency of the WMI solvers. | MODEL | VGG-16 | PreResNet-164 | WideResNet28x10 | |-----------------|:----------:|:-------------:|:---------------:| | Training / SWAG | 2h3m59s | 6h3m18s | 29h39m28s | | WMI Solving | 11m43.134s | 11m20.638s | 19m1.076s | [runtime-performance tradeoff] We perform an analysis of the runtime-performance tradeoff as suggested by the reviewer, with results presented as Figure A.4 in the rebuttal pdf. The analysis is carried out in the Bayesian linear regression setting where the ground-truth posterior predictive distribution is available. The performance of CIBER is evaluated using the KL divergence between the estimated posterior predictive distribution and the ground truth. As we increase the size of the collapsed set, the BMA integrals are defined over a higher dimension, leading to longer WMI solving time; meanwhile, the accuracy brought by the closed-form approximate inference over the collapsed set leads to lower KL divergence. 
We'll include this analysis in the camera-ready version. --- Rebuttal Comment 1.1: Comment: Thank you for providing runtimes and figure A.4. I think a similar figure for some NN experiment(s) with e.g. NLL as the performance measure could be helpful for the camera-ready paper (I understand that this may not have been doable over the rebuttal period). I would also suggest adding a secondary x-axis to the figure indicating the number of weights considered for collapsed inference. Overall and in light of the other reviews I remain with my score.
Summary: The authors propose to tackle the intractable problem of Bayesian model averaging (BMA) in Bayesian neural networks by re-formulating it as “collapsed BMA”, where a small number of “collapsed” samples from a subset of the parameter space (e.g., the last layer) is equipped with a posterior conditioned on the remainder. The key contribution is to cast the marginalization over this conditional posterior as a weighted volume computation (WVC) problem, computing integrals over feasible regions of arithmetic constraints that are to be inferred from binary ReLU activation patterns. As such enumeration of constraints is computationally infeasible, a weighted model integration (WMI) framework is used to solve BMA in a multi-step procedure of encoding the involved distributions and calling existing WMI solvers. The proposed algorithm competes favorably against a number of baselines. Strengths: * [S1] **Significance**. The authors address an important problem with a focus on practical application. * [S2] **Originality**. The proposed solution seems novel and creative. * [S3] **Structure**. Structural elements like enumerated steps and questions are helpful to follow the argumentation. * [S4] **Intuition**. The authors make an effort to build intuition with several examples and illustrations. Weaknesses: * [W1] **Storyline**. While the authors clearly try to establish a coherent structure with well-motivated steps, the many components make it hard to follow and not lose sight of the key idea. * The main topic, collapsed inference, is not addressed until page 5 – not being familiar with WVC/WMI, I felt somewhat lost in notation and descriptions of approximate procedures until the concept is even formally introduced. * I would recommend to have Def. 6 early on, and to map the 4-step procedure in Section 3 directly to the collapsed BMA problem. 
* It would be helpful to provide more intuition around all the notation around WVC/WMI – to readers mainly familiar with Bayesian inference and Bayesian deep learning, these concepts might appear quite foreign. * Also, if space constraints allow it, I would move the central CIBER algorithm into the main paper. * [W2] **Simplifications**. The concept seems to rely on a number of simplifying assumptions that I feel are not properly defended. * Encoding the posterior predictive distribution with polynomial densities * Choosing the conditional posterior, a key object in your proposal, to be uniform with an explanation that reads like “better than not doing it at all” * [W3] **Evidence**. The empirical results do not quite support the claims made. * Some of the figures/examples do not seem convincing to me, see Q2–Q4. * Your claim of “set[ting] a new state of the art” is not thoroughly founded in my opinion. * I cannot find details on how the architecture was chosen for the large datasets. * Baselines: In the transfer learning task, the comparison seems limited (SWA and SWAG are quite similar, and I would not consider SGD a “strong” baseline). Also, I miss Laplace approximation and especially deep ensembles as popular inference schemes that arguably operate in the few-sample setting as well. * The methods are evaluated on small-scale UCI tasks, and the results are not entirely conclusive. * I miss an ablation w.r.t. the number of collapsed samples, which should be an important hyperparameter of your method. * [W4] **No code**. With a novel method of computing BMAs, it would be helpful to have a look at the code. Also, it would allow to inspect implementation details of the experiments. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: * RE [W2] * [Q1] Can you elaborate on the appropriateness of the approximation with polynomial densities, especially for (a) the Gaussian case in high dimensions, and (b) the classification case? 
* [Q2] Have you explored any alternatives to the uniform distribution for the conditional posterior? The empirical evidence provided in Fig. 2 is also not clear to me: (a) How can HMC and CIBER use the “same 10 samples” when HMC recovers the true posterior after convergence, and CIBER samples from the SGD trajectory? (b) I fail to see how Fig. 2b is closer to 2c than 2a. * RE [W3] * [Q3] Fig. 3: Did you repeat the sampling process? If not, the superiority of CIBER established from 5 samples might be a lucky shot. * [Q4] Classification example (l. 240): Kindly elaborate on this, I do not understand the origin of the presented numbers. * [Q5] l. 37: How does collapsed sampling limit to a subset of *variables*? From your descriptions, this should be a subset of *weights*. * [Q6] l. 99: Should one of the inequality signs be a strict “greater” or “lesser”? * [Q7] l. 161: CIBER claims to resolve the issue of WMI-encoding non-ReLU networks. How is this achieved? Proposition 7 still contains $\Delta_{ReLU}$. * [Q8] Eq. 2: The sum is over tuples $(w_s, q)$ but $w_s$ never appears in the summand and $q$ is the same for all $w_s$. Can you make more explicit how the running index enters the sum? On a related note, I do not understand the definition of $W_s$ (l. 183) because I don’t quite see how $w_s$ and $w$ are connected. * [Q9] l. 283: Could you elaborate on what you mean by “greater variance indicates that the weight is prone to have greater uncertainty and thus one might want to perform more accurate inference over it”? * [Q10] l. 314: Should the number of collapsed samples not depend on network architecture rather than number of classes in the final layer? — Minor remarks * l. 31: I would not agree that “existing methods mainly focus on MCMC” in BNN; approximate schemes like deep ensembles are quite popular. * l. 39: The claim that “collapsed samplers are effective at variance reduction in graphical models” seems irrelevant in the context of your proposal. * l. 62: “it can be risky to base inference on a single neural network model” sounds fairly hand-wavy. * l. 79 (example 2): The notation is mixing up bold and non-bold w. * l. 148: How is that the “uncertainty” of the prediction? * l. 183: The definition of $W_s$ seems overly complicated, why not just say “given a set of parameter samples $W_s$”? * l. 265: What is DPLL? * l. 272: I would recommend removing the “Boston” dataset due to its racism issues. * l. 300: There seems to be an “and” missing before “four”. * l. 302: “root mean-squared error”, not “rooted-mean-squared-errors”. * Some references could be added in * l. 35/36 (cutset / Rao-Blackwellised samplers) * l. 51 (HMC) * l. 110/111 (“various” solvers; single reference) Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: I miss a discussion of the limitations, especially since there are numerous approximations and heuristics involved. Societal impacts (or rather, the absence of such) are indeed mentioned. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We greatly thank the reviewer for appreciating our work for its significance, originality, novelty, and structure. We will address your concerns below, with references in the general response. [Questions on W2, W3] [Q1] The appropriateness of the piecewise polynomial approximation is justified by the Stone-Weierstrass theorem [1], which states that any continuous function can be uniformly approximated by polynomials up to arbitrary precision, and which holds in high-dimensional spaces. That is, functions $f$, including the multivariate Gaussian and softmax, can all be effectively approximated by piecewise polynomials, with the theoretical guarantee that for arbitrary $\epsilon > 0$, there exists a piecewise polynomial $p$ such that $\parallel f - p \parallel < \epsilon$ holds, where $\parallel \cdot \parallel$ denotes the supremum norm. [Q2] We agree that exploring alternative approximations to the posterior is interesting future work; in our work, it is chosen to be uniform, and we provide a further justification for this choice in the general response. In Figure 2, 10 samples are drawn from a posterior distribution; HMC uses them directly for inference, while CIBER first applies a uniform approximation to them before the closed-form inference. This figure demonstrates that for overconfident ReLU NNs, using HMC with 10 samples does not help the overconfidence issue much: its confidence intervals are irregular and often too narrow, and many predictions have a shaded region much narrower than those in Figure 2c. In Figure 2b, by contrast, the shaded region is wide, and for the predictions that are further from the ground truth the shaded region is even wider, meaning that those predictions come with less confidence; this shows that using a uniform approximation to the posterior helps estimate the uncertainty in a consistent way. [Q3] We repeat the sampling for the synthetic Bayesian linear regression experiment for 10 trials as suggested by the reviewer. 
The resulting comparison of the predictive distributions is presented in Figure A.1 in the rebuttal pdf, where the estimation by CIBER is much closer to the ground truth posterior predictive distribution than the Monte Carlo method. Further, the averaged KL divergence between the ground truth and CIBER is 0.069 while the one for MC estimation is 0.130, again indicating that CIBER yields a better BMA approximation in the few-sample setting. [Q4] Starting from Line 240, we aim to provide a comparison between CIBER and the Monte Carlo method in the classification setting. A previous work [2] proposes that such a comparison on BMA approximation can be achieved by comparing how close an estimation is to the ground-truth integral as defined in Line 241. We follow the setting in [2] and the results show that the integral estimation $I_c$ by CIBER is closer to the ground-truth integral $I$ than the MC estimation $I_{\mathit{MC}}$. [Q5] We mention “limiting sampling to a subset of variables” only in Line 37 in the context of the collapsed sampler in the probabilistic graphical model literature; in the remainder of the paper, it is mentioned as the sampling of parameters in the context of Bayesian deep learning. [Q6] We’ll fix this typo. [Q7] This issue is resolved by the sampling part in the collapsed samples. Specifically, we propose to use collapsed samples to combine the strengths from two worlds: the scalability and flexibility from sampling and the accuracy from WMI solvers, where flexibility refers to the fact that given a neural network that might contain layers not amenable to WMI encoding, we can sample a subset of network weights including those in such layers, and further condition the network on the sample, resulting in a sub-network that is amenable to the WMI encoding. 
[Q8] As mentioned in Definition 6, $(\mathbf{W}_s, \mathbf{W}_c)$ is a partition of parameters $\mathbf{W}$, meaning that $\mathbf{W}_s \cup \mathbf{W}_c = \mathbf{W}$ and $\mathbf{W}_s \cap \mathbf{W}_c = \emptyset$. With lower case denoting the assignment, it holds that $\mathbf{w}_s \cup \mathbf{w}_c = \mathbf{w}$ and $\mathbf{w}_s \cap \mathbf{w}_c = \emptyset$, that is, $\mathbf{w}_s$ is implicit in Equation 2. We provide an expanded version of this equation in Section B in the rebuttal pdf to clear up this confusion. [Q9] How to choose the collapsed parameter set is an open question in the design of collapsed samplers. One heuristic we provide is to choose the weight parameters whose samples from the SGD trajectory have high variance. This is because the lower the variance a weight has in its samples, the closer the weight is to a deterministic variable, and there is no need to maintain a distribution over deterministic weights. Thus, we choose the weights with high variance in their samples instead since they are further from being deterministic than the others. [Q10] The numbers in Line 314 are the sizes of the collapsed parameter sets; as for the number of collapsed samples, it is independent of the network architecture and the number of classes, and we keep it the same as the number of samples used by the baseline SWAG [3] for a fair comparison. [On W1] We agree that the suggested change in the storyline can help readers have a full picture of the collapsed inference scheme before introducing the connection between BMA and WVC, and it is an easy change to make. We would also like to point out that which part is easier or more important, the collapsed inference scheme or the reduction from BMA to WVC, depends on the reader's background, and that other readers might prefer the current presentation. [On W4] We have provided our code using an anonymous link and have sent the link to AC. 
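The variance heuristic in [Q9] can be sketched as follows; this is a toy illustration with hypothetical names and synthetic data, not the authors' implementation:

```python
import numpy as np

def select_collapsed_set(weight_samples, k):
    """Pick the k weights with the highest variance across samples collected
    along the SGD trajectory; these become candidates for the collapsed set,
    since low-variance weights are close to deterministic and need no
    distribution maintained over them."""
    variances = np.var(weight_samples, axis=0)
    return np.argsort(variances)[::-1][:k]

rng = np.random.default_rng(0)
# 50 trajectory samples of 4 weights with very different spreads
scales = np.array([0.01, 1.0, 0.1, 0.001])
samples = rng.normal(0.0, scales, size=(50, 4))
collapsed = select_collapsed_set(samples, 2)  # picks the two widest weights
```

Here the two weights with the largest sample spread are selected, matching the heuristic that "further from deterministic" weights deserve the more accurate closed-form inference.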
[final remark] We hope that our responses address your concerns and we kindly hope that you would consider raising the score accordingly. We feel that the foundational contribution of this paper has arguably not been fully appreciated, at least not incorporated into the scores, and should be given more weight. --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: Thank you very much for addressing my concerns. Please find below the replies to points that remain open from my perspective. [Q1] My issue was in particular with using a triangular distribution, i.e., a very small number of polynomials, to approximate a Gaussian distribution, a concern also raised by reviewers PnkR and cv4P. I remain skeptical as to how well this approximation will work, especially in higher dimensions. As mentioned several times in the discussion, such approximations should be analyzed quantitatively and discussed in the limitations. [Q2] Thank you for the clarification. However, I don’t quite follow your argument here – if computing an expectation w.r.t. an arbitrary posterior density were equivalent to that w.r.t. a uniform weight distribution, we would hardly face the problem of Bayesian inference at all. Regarding Fig. 2, I see what you mean, but I still disagree with CIBER being “closer” to the 2k-HMC. 10-HMC might indeed be somewhat overconfident but it is not clear to me that CIBER’s being much less confident is more justified – in particular, the wider confidence interval of CIBER should be considered in the light of its poorer mean approximation. [Q4] Thank you for the clarification. I would strongly recommend to include all the computations (as well as training details, as mentioned by reviewer cv4P) in the supplementary material or provided code. [Q7] I would recommend to make this point, which is certainly quite relevant, more explicit. [Q10] My apologies, I was indeed referring to the size of the collapsed sets, not the number of samples. 
A follow-up question here: Isn’t using 10 (100) weights in the last layer for CIFAR10 (CIFAR100) simply equivalent to using all (rather than the most variable) weights in the last linear layer? [W1] That is quite true. I was speaking from the perspective of someone mainly interested in Bayesian inference, with WVC only a means to an end, which might apply for others in your target audience. --- Reply to Comment 1.1.1: Comment: We're more than happy to provide further clarifications. [Q1] We want to reiterate that we know this is, intentionally, a highly biased approximation. The main point of our paper is that we choose to have bias over variance and that this works well given the collapsed inference that is then possible. Experiments show that the bias is acceptable as we get strong empirical performance. [Q2] We never meant they are equivalent; in most practical Bayesian deep learning settings, there are no explicit posteriors, and thus approximations are required. The key point of our paper is that the uniform distribution, as an approximation to the posterior, is often more accurate than other methods in the finite-sample case, as shown in the regression and classification experiments. [Q4, Q7] We have provided the code. We'll further add more training details and the suggested changes in the next version. [Q10] The number of total weights at the last layer is n*m, where n is the number of its inputs and m is the number of classes. Thus the weights we choose are a subset of the last-layer weights. We hope these responses convince the reviewer to change their recommendation towards an accept, and are happy to answer any more questions.
Rebuttal 1: Rebuttal: We would like to thank all the reviewers for their insightful comments and valuable suggestions, and for deeming our work to be well written (all reviewers) with helpful intuitions (eHBs, cv4P) while presenting a novel and creative view (eHBs,cv4P,e8p8) to an important problem (eHBs,PnkR) that builds a valuable connection between sub-fields (e8p8). [rebuttal pdf / code] We’ve attached a rebuttal pdf for the figures. Additionally, we provide our code for all experiments using an anonymous link and have sent the link to AC. [uniform approximation to posterior] We would like to provide a further justification for the uniform approximation to the posterior using a comparison with the Monte Carlo method as follows. Given samples from a posterior $\mathbf{w_i} \sim p(\mathbf{w}\mid \mathcal{D})$, a Monte Carlo sampling procedure estimates the posterior predictive distribution by aggregating over a finite number of samples, i.e., $p(y \mid x, \mathcal{D}) = \frac{1}{n} \sum_{i = 1}^{n} p(y \mid \mathbf{w_i}, x)$, which is equivalent to an expectation of the predictive $p(y \mid \mathbf{w}, x)$ under a discrete uniform distribution $\mathcal{U}(\mathbf{w_1}, \cdots, \mathbf{w_n})$, that is, $p(y \mid x, \mathcal{D}) = E_{\mathbf{w} \sim \mathcal{U}} [p(y \mid \mathbf{w}, x)]$. In CIBER, instead of aggregating over a finite number of weight samples using a discrete uniform distribution, we consider a continuous uniform distribution $\mathcal{U}^\prime[ \mathbf{w_{\mathit{min}}}, \mathbf{w_{\mathit{max}}}]$ where the weight limits are estimated by the samples, to estimate the predictive posterior using a continuous ensemble, i.e., $p(y \mid x, \mathcal{D}) = E_{\mathbf{w} \sim \mathcal{U}^\prime} [p(y \mid \mathbf{w}, x)]$, which can be solved exactly using our proposed reduction to WMI problems. 
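The contrast drawn above, between the discrete uniform over samples (standard Monte Carlo) and the continuous uniform $\mathcal{U}^\prime[\mathbf{w_{\mathit{min}}}, \mathbf{w_{\mathit{max}}}]$ used by CIBER, can be illustrated with a one-dimensional toy predictive. In this linear toy case the continuous-uniform expectation has a trivial closed form; the paper's general case is instead solved via the reduction to WMI:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical posterior samples of a single weight w
w_samples = rng.normal(2.0, 0.5, size=10)
x = 3.0

def predictive_mean(w):
    # toy predictive: a linear model's mean prediction at input x
    return w * x

# Monte Carlo BMA = expectation under a *discrete* uniform over the samples
mc = np.mean([predictive_mean(w) for w in w_samples])

# CIBER-style continuous ensemble: expectation under U[w_min, w_max], with
# the limits estimated from the same samples; for a linear predictive this
# expectation is simply the midpoint prediction
w_min, w_max = w_samples.min(), w_samples.max()
continuous = predictive_mean((w_min + w_max) / 2.0)
```

Both estimators aggregate the same samples, but the continuous ensemble integrates over the whole interval between them rather than just the sample points.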
[satisfaction to SMT formulas] An assignment $\mathbf{x}$ satisfies an SMT formula $\Delta$ defined over variables $\mathbf{X}$ if the formula $\Delta$ is evaluated to be true after substituting the variables by their assignments. For example, given an SMT formula $X_1 + X_2 > 1$, the assignment $(x_1 = 3, x_2 = -1)$ satisfies the formula since $3 + (-1) > 1$ is evaluated to be true, while the assignment $(x_1 = 0, x_2 = -1)$ does not satisfy the formula since $0 + (-1) > 1$ is evaluated to be false. The WMI problem intuitively defines a conjunction of weighted polytopes where the polytopes are defined by the SMT formulas and the weights are defined by the per-literal weights, and the task of WMI amounts to computing the weighted volume of these polytopes by integrating all the weighted assignments that satisfy the SMT formulas. We agree that these terminologies, which might be common in the logic community, are not familiar to many readers. We’ll include their definitions and intuitive explanations in the camera-ready version. [limitations] In this work, we draw the connection between BMA and WVC which inspires us to provide a closed-form approximation to BMA integrals using WMI solvers for accurate inference. Still, the WMI encoding of neural network models is only applicable to layers that can be expressed as SMT formulas. Also, the current WMI solvers are limited to polynomial weights, and thus the reduction to WMI problems is applicable only to piecewise polynomial weights. This limitation might be alleviated in the future by the development of new WMI solvers that are flexible in the weight function families. We’ll include this discussion of limitations in the camera-ready version. [References] [1] De Branges, Louis. "The Stone-Weierstrass theorem." Proceedings of the American Mathematical Society 10.5 (1959): 822-824. [2] Kristiadi, Agustinus, Runa Eschenhagen, and Philipp Hennig. "Posterior Refinement Improves Sample Efficiency in Bayesian Neural Networks." 
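The satisfaction check and the weighted-volume view above can be made concrete for the example formula $X_1 + X_2 > 1$. The sketch below uses a naive Monte Carlo estimate of the (constant-weight) volume over a hypothetical box, not an actual WMI solver, which would compute it exactly:

```python
import random

def satisfies(x1, x2):
    """Does the assignment satisfy the SMT formula X1 + X2 > 1?"""
    return x1 + x2 > 1

assert satisfies(3, -1)        # 3 + (-1) > 1 holds
assert not satisfies(0, -1)    # 0 + (-1) > 1 fails

# Weighted volume of the satisfying region inside the box [0, 2]^2 with a
# constant weight of 1; the true answer is 4 - 0.5 = 3.5 (box minus the
# triangle below the line x1 + x2 = 1).
random.seed(0)
n = 100_000
hits = sum(satisfies(random.uniform(0, 2), random.uniform(0, 2))
           for _ in range(n))
volume = 4.0 * hits / n  # box area times fraction of satisfying assignments
```

A polynomial per-literal weight would simply replace the constant 1 inside the sum, which is exactly the setting WMI solvers handle in closed form.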
Advances in Neural Information Processing Systems. [3] Maddox, W. J., Izmailov, P., Garipov, T., Vetrov, D. P., and Wilson, A. G. A simple baseline for Bayesian uncertainty in deep learning. Advances in Neural Information Processing Systems, 32:13153–13164, 2019. [4] Sharma, Mrinank, et al. "Do Bayesian Neural Networks Need To Be Fully Stochastic?." International Conference on Artificial Intelligence and Statistics. PMLR, 2023. [5] Scherer, William T., Thomas A. Pomroy, and Douglas N. Fuller. "The triangular density to approximate the normal density: decision rules-of-thumb." Reliability Engineering & System Safety 82.3 (2003): 331-341. [6] Karsh, P. K., T. Mukhopadhyay, and S. Dey. "Stochastic low-velocity impact on functionally graded plates: Probabilistic and non-probabilistic uncertainty quantification." Composites Part B: Engineering 159 (2019): 461-480. [7] Riquelme, Carlos, George Tucker, and Jasper Snoek. "Deep Bayesian Bandits Showdown: An Empirical Comparison of Bayesian Deep Networks for Thompson Sampling." International Conference on Learning Representations. 2018. [8] Izmailov, Pavel, et al. "Subspace inference for Bayesian deep learning." Uncertainty in Artificial Intelligence. PMLR, 2020. Pdf: /pdf/a7a1b3a3af6b7335dff0f1be1412924e5c6080ba.pdf
NeurIPS_2023_submissions_huggingface
2023
A Unified Conditional Framework for Diffusion-based Image Restoration
Accept (poster)
Summary: The paper proposes a new framework for diffusion-based image restoration. The underlying architecture is based on that of "DvSR" [49], in which there is a deterministic predictor that does an initial restoration, used in conjunction with a probabilistic diffusion denoising network that applies the diffusion reverse process to estimate the additive residual to the initial prediction, that can be combined to get the final restoration. The novelty is the change in the residual denoiser network (itself based on SR3 [42]). Instead of using spatially invariant convolutions that are fixed after learning, it employs the BPN [50] idea of having basis convolutional kernels that can be linearly combined at different spatial positions, with combination weights that are themselves estimated from a fusion of the initial deterministic restoration, and other auxiliary guidance (diffusion timestep and degradation type were mentioned). In addition, a new inter-step patch-splitting procedure that is integrated with the diffusion denoising process is also used for patch-wise high resolution restoration, to overcome problems with artifacts at patch boundaries. Experiments were conducted on the SID dataset [7] for low-light denoising, on the GoPro dataset [37] for motion deblurring, and on ImageNet JPEG compression artifact restoration. Strengths: Generally there is decent novelty. Although the main ideas are taken from existing methods, the combination of these ideas are relatively new and not immediately obvious. The use of basis kernels from [50] for creating spatially-variant convolutional kernels for the diffusion denoiser is quite interesting. The inter-step patch splitting, although simple, appears to be new and effective, with Fig. 6 quite convincing. Experimental results substantially exceed current SOTA on perceptual metrics (LPIPS, NIQE, FID, KID), though more mediocre on distortion metrics (PSNR, SSIM). 
It is noted though that the authors retrained the DvSR network with the same, more lightweight training hyperparameters, since the computational requirements of DvSR were not accessible to them; otherwise the published results in [49] are better. Weaknesses: On experiments: - Since the method is essentially derived from DvSR, and the authors are retraining it based on their hyperparameters, it is important to conduct experiments with DvSR on the other low-light denoising and JPEG restoration tasks as well. - Generalization tests from GoPro to the HIDE and RealBlur datasets have been done in various other baseline papers. The authors should have also tested on these. - The NIQE numbers don't match the ones in [49], though the other perceptual metrics do. If they are different, the authors should properly justify in the paper why this is the case. Multiple parts of the paper are unclear, as details have been left out. - How is the auxiliary scalar information represented? For example, how are timesteps encoded? Details should at least be in the supplementary material. - What exactly is happening in the inter-step patch-splitting procedure? Figure 2 is very vague. Although the results look good, and a reader can perhaps roughly guess what is happening, there are no details given. - The descriptions in Section 4.7 Ablation Studies are very vague and need to be explained better. - What happens in "w/o spatial guidance"? How does time and other scalar information get injected (or not)? - What are "internal feature guidance", and "degraded image guidance"? These need clearer explanation. The paper is littered with spelling errors. This can simply be resolved by running a spell-checker; that it was not done suggests neglect on the part of the authors. 
Some instances: - L120: "Restoraion" => "Restoration" - L210: "receipt" => "receptive" - L257: "JEPG" => "JPEG" Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Please answer the questions raised in the Weaknesses section. The baseline methods for the different restoration tasks are barely overlapping. How was the choice of baselines for each dataset decided? For example, Uformer is tested on both SID and GoPro, but not Restormer. Why aren't they tested across the different tasks / datasets? While it is generally accepted that spatially variant kernels can be useful in many situations, I think it is worth a discussion when it comes to denoising residuals. Intuitively we may expect the residuals have a more uniform spatial distribution, so kernels don't have to be spatially varying. This is unlike convolutions on normal images with strong spatially dependent image structures. But it seems the results suggest otherwise. Can the authors provide a better understanding on this? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The technical limitations given in Section 6 are quite reasonable. However, the authors should also include some discussion about potential negative societal impact, since image restoration can also lead to hallucination of details that were not present in the original images. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > It is suggested to conduct experiments with DvSR on the low-light denoising and JPEG restoration tasks Thanks for the advice. We will try DvSR [4] on the low-light denoising and JPEG restoration tasks. In Table 1 and Figure 1 of the newly attached PDF, we show the latest LPIPS and the visualization of DvSR, respectively, on extreme low-light denoising. > Generalization tests on the deblurring dataset (HIDE and RealBlur) We will conduct the generalization tests. Some preliminary results on the HIDE dataset are shown in Table 2 of the newly attached PDF. Our method consistently outperforms Uformer. > Inconsistency of NIQE numbers with existing papers We also noticed this phenomenon in the experiments. Various implementations of NIQE [1-3] report different numerical values but maintain the same rank order. To ensure a fair comparison, we adopted the NIQE implementation from basicSR [2] for all methods in our experiments. We will revise the paper to make it clear. > Unclear parts and typos of the paper. We thank the reviewer for pointing out the unclear parts. We will revise them as follows: - We follow SR3 in employing positional encoding and a stack of two linear layers with a Swish activation function to process all auxiliary scalars. - To enhance clarity, we will include a figure and provide a more detailed explanation to present the inter-step patch-splitting procedure. - For the 'w/o spatial guidance' variant, we exclude the spatial guidance part of the AKGM module while retaining other scalar information, which is injected by adding it to the feature map. - For each AKGM block, the spatial guidance is generated using the output of the initial predictor. The 'internal feature guidance' employs the intermediate feature map within each block to generate spatial guidance. Additionally, the 'degraded image guidance' uses the input degraded image to generate spatial guidance. 
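As a rough illustration of the SR3-style scalar conditioning described above (a minimal NumPy sketch with hypothetical dimensions and random weights, not the authors' implementation): a scalar such as the diffusion timestep is mapped to a sinusoidal positional encoding and then passed through two linear layers with a Swish activation.

```python
import numpy as np

def sinusoidal_embedding(t, dim=64):
    """SR3-style positional encoding of a scalar (e.g. the diffusion timestep)."""
    half = dim // 2
    freqs = np.exp(-np.log(10000.0) * np.arange(half) / half)
    args = t * freqs
    return np.concatenate([np.sin(args), np.cos(args)])

def swish(x):
    """Swish / SiLU activation: x * sigmoid(x)."""
    return x / (1.0 + np.exp(-x))

def scalar_mlp(emb, w1, b1, w2, b2):
    """Two linear layers with a Swish activation in between."""
    return swish(emb @ w1 + b1) @ w2 + b2

# hypothetical sizes and randomly initialized weights, for illustration only
rng = np.random.default_rng(0)
dim, hidden = 64, 128
w1, b1 = rng.normal(size=(dim, hidden)) * 0.02, np.zeros(hidden)
w2, b2 = rng.normal(size=(hidden, hidden)) * 0.02, np.zeros(hidden)

emb = sinusoidal_embedding(t=10.0, dim=dim)
feat = scalar_mlp(emb, w1, b1, w2, b2)  # would be added to the feature map downstream
```

The resulting vector would then be injected by adding it to the feature map, as the rebuttal describes for the 'w/o spatial guidance' variant.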
> Questions: - How were the baselines chosen for each dataset? We decided to adopt existing state-of-the-art methods as the backbones for each task, which might have different architectures. It is a good idea to add Restormer on the GoPro dataset, which would make the comparison more consistent across tasks. Since Uformer shows better performance than Restormer on the GoPro dataset, we selected Uformer as the representative method at that time. For the SID dataset, existing methods do not report results on it, so we adopt all the strong backbones for comparison. - The understanding of adopting spatially variant kernels in the residual domain. Although the residuals may appear more "uniform" when visualized, their inherent properties persist. Texture areas remain challenging to learn, making the adoption of spatially adaptive kernels more effective in handling the task. Additionally, considering that the residuals contain high-frequency information while the initial predictor output includes low-frequency components, employing spatially adaptive kernels becomes a natural choice to address the high-frequency details adaptively. [1] https://github.com/chaofengc/IQA-PyTorch [2] https://github.com/xinntao/EDVR/blob/master/basicsr/metrics/niqe.py [3] https://github.com/aizvorski/video-quality/blob/master/niqe.py [4] Whang et al., Deblurring via stochastic refinement. CVPR, 2022. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their additional experiments and further clarifications. If the paper is accepted, please provide more clarification details and more extensive experimental results, beyond the scope of a rebuttal.
Summary: This paper proposes a unified conditional framework for diffusion-based image restoration. In the proposed framework, an initial predictor is used to produce a rough restoration of the input degraded image. The roughly restored image is then used as a condition for the diffusion model via a Conditional Integration Module. Experiments show that the proposed approach achieves promising results on low-light denoising, deblurring, and JPEG restoration. Strengths: 1. The proposed method achieves promising performance on three different restoration tasks. The proposed framework is versatile in the sense that it can handle multiple types of degradations. 2. This paper is easy to follow. Weaknesses: 1. The technical novelty is limited. In particular, the residual prediction formulation was proposed in [49]. The inter-step patch-splitting strategy has been discussed in [*1]. 2. Many of the proposed components do not lead to significant performance gains. Specifically, in Table 4, the worst model obtains an LPIPS of 0.253, which is still significantly better than existing works in Table 1. The performance gain from the proposed components is insignificant. 3. Given the insignificant performance gain mentioned above, the performance gain could come from the increase in model complexity. Therefore, it is advised to provide additional analysis on the model to demonstrate the superiority of the proposed method. [*1] Álvaro Barbero Jiménez, Mixture of Diffusers for scene composition and high resolution image generation, 2023 Technical Quality: 3 good Clarity: 3 good Questions for Authors: While the proposed method achieves promising performance on multiple restoration tasks, the novelty is limited and the source of the performance gain is unclear. Please refer to the weakness section for the questions. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. 
Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Limitations are appropriately discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > The technical novelty is limited (residual prediction formulation and the inter-step patch-splitting strategy.). As described in L67-L81, we actually did not claim the residual prediction as one of our contributions. The residual formulation is a common practice and has been adopted in many previous methods in low-level vision. In this paper, our contributions are a unified conditional framework and an Adaptive Kernel Guidance Block (AKGB) to incorporate the conditional information into diffusion models. The inter-step patch-splitting strategy shares a similar spirit with the unpublished paper [1]. The key difference is that [1] needs to fuse the overlapped regions with Gaussian weights, since it adopts several diffusion models for high-level vision tasks, which need to understand semantic information over a large receptive field. Our method is designed for low-level vision tasks, which mostly focus on local textures. Therefore, we can perform padded patch inference without fusing at each diffusion step. > On the SID dataset, the proposed method outperforms existing regression works significantly. The performance gain from the proposed components is insignificant. When facing severe degradation or highly ill-posed tasks (such as extreme low-light denoising), generative methods tend to outperform regression methods due to the availability of multiple potential candidates for each input. Generative methods can generate more diverse and realistic appearances, while regression methods tend to predict averaged results, leading to blurry outcomes. Consequently, on the SID dataset, a substantial improvement in terms of LPIPS is observed. Conversely, for the GoPro dataset (Table 2 in our paper), where the problem is less ill-posed, the performance gain is smaller. Please note that, unlike PSNR, LPIPS is not scaled by the log function, resulting in smaller quantitative improvements. 
However, even small LPIPS improvements correspond to a significant gain. For instance, in Table 2 of our paper, our method improves LPIPS by 0.007 compared to Uformer, yet the corresponding FID improvement exceeds 12.5, highlighting the substantial qualitative gains achieved. > Additional analysis on the model complexity We compare the model complexity with DvSR [3] in the ablation studies. As indicated in Table 1 of the newly attached PDF, our approach exhibits comparable computational cost to DvSR while achieving significantly better perceptual metrics. The corresponding visual improvements are presented in Figure 1. [1] Mixture of Diffusers for scene composition and high-resolution image generation [2] EnhanceNet: Single Image Super-Resolution Through Automated Texture Synthesis [3] Whang et al., Deblurring via stochastic refinement. CVPR, 2022. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' response. I have the following additional questions: 1. I do not understand the sentence `Therefore, we can perform the padded patch inference without considering fusing in each diffusion step.` in the response. From Fig.2, I believe that fusion is performed at each step? 2. Regarding the performance gain: **a.** What I mean is even the worst model in Table 4 is already significantly better than the models in Table 1 (i.e., LPIPS 0.253 of `Degraded Image Guidance` vs 0.338 of `Uformer`). The performance gain brought by the proposed components is 0.253-0.222=0.031 vs the improvement from the baseline over previous work 0.338-0.253=0.085. **b.** When we further compare `Regression` vs `Uformer` in Table 1 and Table 2, we see that the proposed architecture underperforms previous works. This could mean that the proposed guidance is ineffective. Please correct me if I am wrong. 3. Speaking of a universal framework, the submission seems to be missing related works on degradation modeling for super-resolution. In particular, DASR [*1] encodes the input into an embedding. Conceptually the proposed method shares some similarities. 
It is advised to discuss and compare with methods in the corresponding domain. [*1] Liang et al., Efficient and Degradation-Adaptive Network for Real-World Image Super-Resolution, ECCV, 2022 --- Reply to Comment 1.1.1: Title: Response to the additional questions from Reviewer KQ6K Comment: Thanks for your additional questions. > 1. The patch fusion at each diffusion step. Our proposed inter-step splitting strategy does not involve weighted fusion in the overlapped regions of patches. Instead, for each overlapping patch, only the non-overlapping center region is cropped and kept in the result. > 2.a The improvement from the baseline is larger than the improvement from the proposed components The reason for the large improvements of the baseline model is that the task (extreme low-light denoising) is highly ill-posed. Regression methods tend to produce more blurry images due to the availability of multiple candidate results. This leads to regression methods having higher PSNR but worse LPIPS (see Figure 2 of our newly attached PDF). As a result, the baseline model appears to improve a lot. When the task is less ill-posed (Table 2, GoPro dataset), the corresponding improvements would not be similarly large. But this does not indicate that the improvement from the proposed modules is insignificant. 1. Small LPIPS improvements still correspond to a significant gain. For instance, in Table 2 of our paper, our method improves LPIPS by 0.007 compared to Uformer, yet the corresponding FID improvement exceeds 12.5, highlighting the substantial qualitative gains achieved. 2. We compare with DvSR [2] in the newly attached PDF in Table 1 and Figure 1. Both the quantitative and the qualitative results demonstrate significant improvements of our method. > 2.b The proposed architecture underperforms previous works like Uformer. A direct comparison between Uformer and our regression variant is not feasible. 
The proposed components are designed for diffusion models to integrate the conditional information effectively, not to outperform previous methods on regression tasks. Here, we only want to provide a regression result and show the difference between the regression and generative models under similar settings. As a result, the training settings, as well as the distinct characteristics of the architectures, are quite different from those of Uformer. It is beyond our paper's scope to evaluate its effectiveness on regression tasks. > 3. Discussion of degradation modeling methods DASR is designed to learn degradation representations for the real-world super-resolution task. While a direct comparison between our method and DASR [1] is not applicable, it is very interesting to discuss degradation modeling from the universal framework perspective in our paper. Please let us know if the above responses have further clarified our paper. [1] Liang et al., Efficient and Degradation-Adaptive Network for Real-World Image Super-Resolution, ECCV, 2022. [2] Whang et al., Deblurring via stochastic refinement. CVPR, 2022.
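The center-crop stitching discussed in this thread can be sketched as follows (a minimal NumPy illustration, assuming a single-channel image whose sides are multiples of the patch size; the hypothetical `process` callback stands in for one diffusion step on a padded patch, and the sizes are arbitrary):

```python
import numpy as np

def split_keep_centers(img, patch=64, pad=8, process=lambda p: p):
    """Run `process` on padded overlapping patches, then stitch back only the
    non-overlapping center of each patch (no weighted fusion of overlaps)."""
    h, w = img.shape
    out = np.empty_like(img)
    # reflect-pad so every patch, including border ones, has full context
    padded = np.pad(img, pad, mode="reflect")
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            tile = padded[y:y + patch + 2 * pad, x:x + patch + 2 * pad]
            res = process(tile)                  # e.g. one denoising step
            # keep only the center; the overlapped borders are discarded
            out[y:y + patch, x:x + patch] = res[pad:pad + patch, pad:pad + patch]
    return out

img = np.random.default_rng(0).random((128, 128))
restored = split_keep_centers(img)               # identity `process` here
```

With an identity `process`, the stitched output reproduces the input exactly, which is why no Gaussian-weighted blending of overlaps is needed when the task only depends on local textures.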
Summary: This paper introduces a unified conditional framework for image restoration tasks based on diffusion models. The framework utilizes a UNet to predict initial guidance and incorporates multi-source conditional information into each block to enhance the generative model's guidance. An Adaptive Kernel Guidance Block (AKGB) dynamically fuses kernels in each block. Additionally, a patch-splitting strategy is introduced to handle high-resolution images, ensuring the consistent generation of high-resolution images without grid artifacts. Extensive experiments are conducted on extreme low-light denoising, image deblurring, and JPEG restoration. Strengths: • The setting for generative image restoration is very interesting and should be a promising direction, as it addresses the limitation of existing methods that primarily focus on improving PSNR without consistent visual quality according to human perception. In recent low-level vision research, although PSNR improvements have been observed, the enhancements in visual quality have been minimal. This paper provides a good starting point for generative image restoration. • The challenge of maintaining consistency in high-resolution images is important for generative models. The proposed 'inter-step patch-splitting' idea is cute and effective in solving this problem for diffusion models. • The method demonstrates good generalization across three different image restoration tasks on perceptual quality. • The visualization results are impressive. With the perception-oriented target, the method seems to improve the perceptual quality a lot. Weaknesses: • The guidance map plays an important role in this method, as it outputs the low-frequency information and serves as the spatial guidance for each block. Visualizing the guidance map can help readers to understand this method better. • While the 'Inter-step Patch-splitting Strategy' is cute, it would be valuable to include quantitative results to demonstrate its effectiveness. 
• There are some minor typos: caption of Fig. 1; L286 'LayerNomr -> GroupNorm'; L158 high light. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: • See the weaknesses. • It is highly recommended to release the code to encourage further research. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: In the Limitation and Future Direction section, the authors point out that the method may generate some unnatural textures. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > Visualization of the guidance map We appreciate the reviewer's recognition of our efforts to enhance the perceptual quality and qualitative results of our method. In the revised version, we will include visualizations of the guidance map to provide further insights into our approach. > It is suggested to add quantitative results for the patch-splitting strategy. Thank you for your valuable advice. We will include quantitative results around the overlapped regions to show the effectiveness of the patch-splitting strategy. > Minor issues and code: Minor issues will be fixed, and the code is being prepared. --- Rebuttal 2: Comment: Reviewer V6gu: We need you to respond to the rebuttal ASAP --- Rebuttal Comment 2.1: Comment: After reading the authors' response and discussions with other reviewers, my concerns have been well addressed, and the weaknesses of this paper can be fixed in the final version. I think this paper highlights an important direction to improve the visual quality in low-level vision. Moreover, the visualization results are notably impressive. Consequently, I'd like to change my rating to Accept.
Summary: This paper proposes a novel framework for supervised image restoration. The framework consists of an initial predictor and a newly designed conditional diffusion model. The initial predictor first produces an initial restoration result, then the conditional diffusion model takes the initial result as well as the degraded image as inputs and progressively generates the final result. Besides, the authors also propose a novel solution for restoring high-resolution images. Experiments on low-light denoising, image deblurring, and JPEG restoration show that the proposed framework achieves superior performance in terms of perceptual metrics. Strengths: 1. The solution for high-resolution image restoration seems novel and inspiring. 2. The proposed method achieves superior performance on three restoration tasks in terms of perceptual metrics. 3. The proposed denoiser backbone seems reasonable. 4. The paper is well-written and easy to follow. Weaknesses: 1. Though it seems reasonable, there is no evidence that proves the advantage of the designed backbone. At least, the authors should compare it to the denoiser backbone used in SR3 [1]. This is necessary when designing a new backbone instead of using existing ones. 2. It seems that the performance on distortion metrics (i.e., PSNR, SSIM) lags far behind state-of-the-art methods. 3. Some minor issues. (1) The conference information in your citations needs to be updated. For example, SR3 [1], DDNM [2], and LDM [3] should be TPAMI, ICLR, and CVPR, rather than arXiv, CoRR, and None. (2) There exist other methods [3, 4] for high-resolution image restoration; it is appropriate to include them in the discussion. [1] Saharia et al., Image super-resolution via iterative refinement. TPAMI 2022. [2] Wang et al., Zero-shot image restoration using denoising diffusion null-space model. ICLR 2023. [3] Rombach et al., High-resolution image synthesis with latent diffusion models. CVPR 2022. [4] Wang et al., Unlimited-size diffusion restoration. CVPRW 2023. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. Please see the weaknesses. 2. I wonder if the proposed framework can be used for general image-to-image translation tasks, or benefit the text-to-image task. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: Given the concerns mentioned in the weaknesses, I give a borderline score. However, I may also change my score if there is enough evidence of the profound value of this paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > It is suggested to compare the proposed method with the existing backbone. To demonstrate the advantage of our designed backbone, we included an additional existing method, DvSR [3], in the ablation studies. As indicated in Table 1 of the newly attached PDF, our approach exhibits comparable computational cost to DvSR while achieving significantly better performance in terms of perceptual metrics. The corresponding visual improvements are presented in Figure 1. > The concern that the performance of the proposed method on distortion metrics (PSNR, SSIM) lags far behind state-of-the-art methods Note that PSNR and SSIM are not directly related to the perceptual quality of the restored images. Most previous methods are deterministic and PSNR-oriented, resulting in averaged predictions with limited visual improvements and higher PSNR, often appearing blurry [4]. Some visualizations can be found in Figure 2 of the newly attached PDF. In contrast, our method learns to predict the potential distribution, which may affect PSNR results but significantly enhances visual quality, yielding sharp and visually appealing outcomes. > Minor issues. Thanks for your kind comments. We will update the citations and discuss the existing latent space [1] and Mask-Shift [2] solutions. It's possible to extend to other image-to-image translation tasks (colorization, deraining, etc.). We are actively working on incorporating these tasks and will present them in the final version. [1] Rombach et al., High-resolution image synthesis with latent diffusion models. CVPR 2022. [2] Wang et al., Unlimited-size diffusion restoration. CVPRW 2023. [3] Whang et al., Deblurring via stochastic refinement. CVPR, 2022. [4] Sajjadi et al., EnhanceNet: Single Image Super-Resolution Through Automated Texture Synthesis. ICCV 2017. --- Rebuttal Comment 1.1: Comment: Thanks for your response. I will keep my rating. 
--- Rebuttal 2: Comment: Reviewer 8ANp: Your review does not justify your recommendation to (borderline accept). Your response to the authors rebuttal is not informative. Unless there is further substantive discussion, I will ignore this review.
Rebuttal 1: Rebuttal: We thank the reviewers for their valuable and professional comments. Overall, reviewers (4fFj, 8ANp, V6gu, 37QZ) acknowledge the novelty and the performance of our paper. We have uploaded a one-page PDF containing figures and tables to help address the reviewers' concerns; we will then respond to each reviewer individually. Pdf: /pdf/ce282609023c4f5ab8707ed472b9b65a0803eba8.pdf
Dataset source: NeurIPS 2023 submissions (Hugging Face)
Conference year: 2023
Summary: This paper proposes a new framework for image restoration with diffusion models. The authors design strategies to better use the condition information and also propose a strategy for high-resolution images. The results are competitive over baselines. The presentation is clear. I tend to accept this paper. Strengths: - The performance is strong. It outperforms existing baselines on several tasks and datasets quantitatively. The perceptual performance is also good. The data fidelity and details are good. - This paper provides detailed experiments and analysis for each design of the framework, which helps the readers to understand the method. However, I would say that it would be better to provide some results on high-resolution images. - The paper writing is good and clear. - Most results are obtained on real-world datasets. Weaknesses: - It would be better to provide some results on high-resolution images. - It would be better to provide results on more image restoration tasks such as colorization and inpainting. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See weaknesses. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: See weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > Provide more high-resolution image results In our paper, the test images of the SID dataset are high-resolution (4256x2848). Due to the limited space, we only show some crops in Fig. 3 and provide some full-resolution images in the supplementary material. We will revise the paper to make this point clear. > Extension to more image restoration tasks such as colorization and inpainting It's possible to extend the method to other image-to-image tasks like colorization, deraining, etc. We are actively working on incorporating these tasks and will present them in the final version. --- Rebuttal Comment 1.1: Title: Keep my rating as borderline accept Comment: Thanks for your response. I will keep my rating as borderline accept. --- Rebuttal 2: Comment: Reviewer 4fFj: Your review says Soundness: 2 fair, Presentation: 2 fair, Contribution: 2 fair, but you recommended borderline accept. This doesn't make much sense. The rebuttal and your response to it are also uninformative. Unless there is additional meaningful discussion, I will be forced to ignore this review. --- Rebuttal Comment 2.1: Title: More details Comment: It solves my concerns. 1) The paper argues that "A simple yet effective inter-step patch-splitting strategy is proposed for handling high-resolution images. This practical strategy enables diffusion models to generate consistent **high-resolution** images without grid artifacts." However, I did not notice these results. Hence, I asked the authors to provide some results. They explain the reasons and I buy their explanations. 2) I hope results on different tasks can be provided. They agree and promise that they will update it in the final version.
Leveraging the two-timescale regime to demonstrate convergence of neural networks
Accept (poster)
Summary: This paper studies the training of 2-layer neural networks and proposes a two-timescale limit/regime. In this limit/regime, the learning rate of the first layer is much smaller than the learning rate of the second layer. As a result, the training of the network can be viewed as training the first layer and performing linear regression over the first layer outputs, which is simpler/more structured than training both layers at the same rate. To demonstrate the usefulness of this strategy, this paper considers a toy model and shows that a 2-layer network can fit a certain family of 1D piecewise constant functions. The authors also empirically show that SGD can fail to fit this family of functions outside the two-timescale regime. Strengths: * The presentation is clear. * This paper not only contains results in the $\varepsilon \to 0$ limit, but also non-asymptotic results for small but non-vanishing $\varepsilon$. The derivation in the $\eta \to 0, \varepsilon \to 0$ limit is clean (Sec. 4.2), and the asymptotic-to-non-asymptotic parts are also easy to follow. * This paper reminds me of a technique that is gaining popularity in the theory community: training the network for one (large) step, freezing the first layer, and then performing linear regression in the second layer using the features learned in the first layer (cf. [AAM22], [DLS22]). The limitation of this technique is that since the first layer is fixed after the first step, it lacks the ability to refine the learned features. I feel the strategy introduced in this paper is a potential remedy to this problem, as here we also have the linear regression part of the argument, but the first layer can be trained for multiple steps. [AAM22] Abbe, Emmanuel, Enric Boix Adsera, and Theodor Misiakiewicz. 
“The Merged-Staircase Property: A Necessary and Nearly Sufficient Condition for SGD Learning of Sparse Functions on Two-Layer Neural Networks.” In Proceedings of Thirty Fifth Conference on Learning Theory, 4782–4887. PMLR, 2022. https://proceedings.mlr.press/v178/abbe22a.html. [DLS22] Damian, Alex, Jason D. Lee, and Mahdi Soltanolkotabi. “Neural Networks Can Learn Representations with Gradient Descent.” arXiv, June 30, 2022. http://arxiv.org/abs/2206.15144. Weaknesses: * Almost all things in this paper are 1D and somewhat tailored to this specific piecewise constant function class. I wonder whether/how this can be generalized to higher dimensions, other network architectures, and more general function classes. * It seems that the dynamics are still relatively local. That is, we need some neurons in each target interval at initialization, which may not be reasonable when the dimension is high. * The authors should probably add some discussion on the training-for-one-large-step-type technique (see the Strengths part of the review). Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: * How general is this strategy? Is it possible to apply this strategy to problems (even toy ones) with dimension higher than $1$ and more standard 2-layer networks? * Can we relax the requirement on the initialization? For example, can we remove the requirement that each interval contains at least $6$ $u_j$? Intuitively, when some interval contains no neurons, it is still possible that some neurons from those fitted intervals can move to that interval and fit the target function on that interval. I know the current requirement is reasonable in your setting, I am asking this because I feel this type of global feature learning process is important when the dimension is high. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: This is a theoretical work and, as far as I can see, has no potential negative societal impacts. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > How general is this strategy? Is it possible to apply this strategy to problems (even toy ones) with dimension higher than 1 and more standard 2-layer networks? We have evidence showing that **the two-timescale strategy applies to other settings (higher-dimensional problems, ReLU networks for approximating piecewise-affine functions), see the common rebuttal**. Nevertheless, a full general understanding of the impact of the two-timescale regime is an open question, which is left for future work. > Can we relax the requirement on the initialization? For example, can we remove the requirement that each interval contains at least $6 u_j$ ? Intuitively, when some interval contains no neurons, it is still possible that some neurons from those fitted intervals can move to that interval and fit the target function on that interval. I know the current requirement is reasonable in your setting, I am asking this because I feel this type of global feature learning process is important when the dimension is high. We are rather pessimistic, for the following reason. It is reasonably easy to craft an example of a local minimum of the two-timescale problem (equation (5) of the paper) where a discontinuity of the target function is not covered by any neuron. **Without any assumption on the initialization, it is possible that the gradient flow dynamics may fall into this local minimum.** Avoiding such minima is a challenging next step which would certainly require additional ingredients. We will add this discussion (and exhibit such an example of a local minimum) in the next version of the paper. > The authors should probably add some discussion on the training-for-one-large-step-type technique (see the Strengths part of the review). Finally, we agree that the two-timescale regime can be seen as a refinement of layer-wise training, where we cyclically take one gradient step in the inner layer and many gradient steps in the outer layer. 
We thank you for the additional references; we will make sure to add them in the paper and discuss the connection. --- Rebuttal Comment 1.1: Comment: Thanks for the response. I will keep my score.
Summary: This paper studies the problem of fitting a piecewise constant univariate function with a shallow neural network. Specifically, the authors consider a gradient flow with a time-scale difference between the dynamics of the first- and second-layer weights. It is shown that the trained shallow network can be arbitrarily close to the target function in $\mathcal{L}_2$, as long as the weight update on the first layer is much slower than the one on the second layer. Strengths: The two-time-scale regime seems novel in training neural networks. The results are well presented and their proofs are clearly explained. Weaknesses: Only a very special problem is studied. 1. The target function is univariate and piecewise constant. This is very restrictive. 2. The loss is a population loss, thus the case of finite samples (fixed design or random samples) is not considered. 3. The network is quite different from those studied in other works. The activation function is a smoothed version of a stair function; the first layer is only parametrized by the bias at every neuron. With this many assumptions/restrictions, even though the main results are well presented and explained, it is hard to see whether those observations can give insights into the usefulness of implementing two-time-scale training for practical networks. The authors did not discuss the limitations of these assumptions, nor did they show how two-time-scale training can be used in practice, even empirically. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See "Weaknesses" Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: See "Weaknesses" Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We agree with the reviewer that the main focus of this paper is on the study of a specific problem and that it does not readily lead to practical implications. Below, we detail why we still believe that our work contributes towards a theory of neural networks. **The theory of the optimization of neural networks is a notoriously hard problem; consequently, research on this topic makes progress in small steps.** For instance, the NeurIPS paper of Safran et al. (2022), to which we compare in detail, is similarly restricted. The other related works to which we compare in Section 3 are also restricted to specific settings: for instance, the mean-field setting is restricted to infinitely large neural networks and does not lead to quantitative convergence rates; the neural tangent kernel approach is also restricted to large neural networks and does not explain feature learning. Our work also has important restrictions, but is novel in providing a theory for neural networks with a moderate width and feature learning dynamics. Moreover, our description of the non-linear dynamics is remarkably precise and quantitative, compared to the other approaches. Furthermore, our paper underlines the **general applicability of the two-timescale technique** (see Section 4.1), beyond the specific one-dimensional illustration that we study in depth. The two-timescale regime is mathematically tractable if one can solve the two-timescale limit equations (5) of the paper. In this paper, we restrict ourselves to a simple setting for which the solution is simple (Section 4.2). This enables a clear illustration of the central concept of this paper, the two-timescale regime. However, **any other setting for which Eqs. (5) can be solved could lead to a convergence analysis following the same lines.** More precisely, we have the following expectations for the generalizations that the reviewer suggests: 1. 
We believe that **the generalization to higher-dimensional problems is possible, as shown by simulations**, although it is a theoretical challenge, which we reserve for future work. We develop this point in the **common part of the rebuttal**. 2. **Our results can be generalized to finite sample sizes** (say, for single-pass SGD) at the cost of greater technicalities in controlling the deviation from the two-timescale limit. This follows a perturbation analysis similar to the one of Section 5.2, with additional terms due to random sampling. This setting actually corresponds to the experiment described in lines 279-281, which shows that SGD indeed closely follows the behavior of the two-timescale limit. We will mention this possible generalization and add a rough proof sketch in the next version of the paper. 3. We believe that **our results can be adapted to ReLU networks for approximating piecewise-affine functions, as shown by simulations** and as stated in the conclusion of the paper. Simulations show that the expected alignment of neurons with the discontinuities (of the derivative) still holds **(see common part of the rebuttal)**, although the mathematical derivation is more arduous. In dimension $1$, we also believe that **it should be possible to analyze more standard parametrizations of the inner layer** of the form $$f(x; a, u) = a_0 + \sum_{j=1}^m a_{j} \sigma_\eta (v_j x - u_{j}).$$ However, this would make the parametrization of the “position” of neuron $j$, namely $u_j / v_j$, less natural, and thus would add technical difficulties to the proofs. We avoided this technicality for simplicity. --- Rebuttal Comment 1.1: Comment: Thank the authors for the response. I personally still think the assumptions (1. and 3. in my original review) are very strong. Nonetheless, the authors provide additional numerical experiments in the rebuttal showing these assumptions might be relaxed, thus I raised the score. 
--- Reply to Comment 1.1.1: Comment: Thank you for your questions that contribute to improving the paper by adding a thorough discussion on possible generalizations, and for your willingness to raise the score.
Summary: In this paper, the authors considered the problem of learning a piecewise linear function in 1d using a two-layer neural network. They considered gradient flow on the mean-square loss with different learning rates for the 2 layers (two-timescale). Specifically, the outer-layer weights move much faster than the inner-layer weights. The activation used in the 2-layer network is similar to a rescaled version of the sigmoid activation. This paper shows that, under a proper choice of the parameters, GF converges to a small loss within time polynomial in the relevant parameters. Experiments are provided to support the results. Strengths: 1. Understanding the training dynamics and convergence of neural networks is an important problem. 2. The paper is overall easy to follow and clearly written. The proof sketch is given so that the reader can understand the main proof idea. 3. The idea of using two timescales/two different learning rates in analyzing the training dynamics of neural networks seems interesting. Weaknesses: 1. The problem considered is only in 1d, and it would be interesting to see if the analysis could be generalized to multiple dimensions. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. In the experiments, I was wondering if one could elaborate on Figure 5, namely why ‘the dynamics create a zone with no neuron’. It seems to me that there are neurons near every discontinuity of $f^*$. Also, it seems that in Figure 4(c) and Figure 5(c) the numbers of training steps are different; what is the reason for that? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The limitations are discussed in the paper. This is a theoretical work, and therefore no negative societal impact is foreseen. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > The problem considered is only in 1d and it would be interesting to see if the analysis could be generalized to multiple dimensions. This question is shared with the other reviewers and is addressed in the common rebuttal. > In the experiments, I was wondering if one could elaborate on Figure 5, namely why ‘the dynamics create a zone with no neuron’. It seems to me that there are neurons near every discontinuity of $f^*$. In Figures 4 and 5, **the neurons are represented by vertical dashed lines** (the orange round markers do not represent neurons; they are only present for black-and-white and color-blind readability). In Figure 5(c), the discontinuity of the target function at $x=0.5$ is not covered by a neuron: the neural network has a flat zone between $0.41$ and $0.6$, whereas it should jump at $0.5$ if it were optimal. This stands in sharp contrast with Figure 4(c), where neurons are evenly distributed on the whole interval. We will improve the readability of this figure in the next version. > Also, it seems that in Figure 4(c) and Figure 5(c) the numbers of training steps are different; what is the reason for that? It could seem like we are favouring the algorithm in the two-timescale regime by running it on a larger number of steps. In fact, it is not the case because **we ran both algorithms until convergence**; running the algorithm with $\varepsilon = 1$ for a larger number of steps would not lead to an improvement. Further, we can elaborate on the difference in the number of steps needed to reach convergence in the two cases. There are two levels of analysis to explain this difference. The most straightforward reason is that the number of steps should scale more or less as the inverse of $\varepsilon$, since each step (in $u$) gets scaled down by $\varepsilon$ in the two-timescale regime. This explains why the number of steps is much larger in Figure 4. 
However, you may note that it is not larger by a factor $1/\varepsilon = 50{,}000$, but “only” by a factor $\simeq200$. To understand why, we have to look more closely at the details of the dynamics, to understand the order of magnitude of the number of steps required before convergence. In Figure 4, the limiting factor for convergence is the movement of the positions $u$. At each step, they move by an order of $\varepsilon h \simeq 2 \cdot 10^{-10}$. The positions need to move on a scale of $5 \cdot 10^{-2}$ to align with the discontinuities of the target. Hence the required number of steps is $0.05 / (2 \cdot 10^{-10}) \simeq 2 \cdot 10^8$. In Figure 5, on the contrary, the limiting factor for convergence is the movement of the weights $a$. They move by an order of $h \simeq 10^{-5}$ at each step, and they need to move by a distance of $\simeq 5$, necessitating $\simeq 5 \cdot 10^5$ steps to achieve convergence. We will clarify this point in the camera-ready paper if accepted. --- Rebuttal Comment 1.1: Comment: Thanks for the response addressing my question. I will keep my score.
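The order-of-magnitude argument above can be double-checked with a few lines of arithmetic. This is a quick sketch; the constants ($\varepsilon$, $h$, and the distances to travel) are the ones quoted in the rebuttal.

```python
# Back-of-the-envelope check of the step counts discussed above.
# All constants (eps, h, travel distances) are taken from the rebuttal text.
eps = 1 / 50_000        # two-timescale ratio, 1/epsilon = 50,000
h = 1e-5                # step size of the fast (outer) layer

# Figure 4 (two-timescale regime): positions u move by ~eps*h per step
step_u = eps * h                      # ~2e-10
steps_fig4 = 0.05 / step_u            # distance 5e-2 to cover -> ~2.5e8 steps

# Figure 5 (standard regime): weights a move by ~h per step
steps_fig5 = 5.0 / h                  # distance ~5 to cover -> ~5e5 steps

print(f"Figure 4: ~{steps_fig4:.1e} steps, Figure 5: ~{steps_fig5:.1e} steps")
```

The resulting ratio between the two step counts is a few hundred, consistent with the "$\simeq 200$" order of magnitude quoted above rather than the naive $1/\varepsilon = 50{,}000$.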
Summary: The paper studies the training dynamics of fitting a one-hidden-layer shallow network with a Heaviside-like activation to a piecewise constant ground-truth function with one-dimensional input. It proves that gradient flow always recovers the ground truth in finite time with only mild over-parametrization. Strengths: This paper is well written, the intuitions behind the technical proofs are well presented, and the theoretical results are well supported by numerical experiments. The theoretical result itself is a nice observation, despite being in the simple one-dimensional input case. Weaknesses: The paper only considers the one-dimensional case. Based on the proof techniques, it is not clear if it is extendable to high dimension, which is of ultimate interest in the deep learning theory community, since the derivation in section 4.2 would no longer hold. While it is understandable that such a result would be difficult to obtain in high dimension, the paper didn't present any experimental results in the high-dimensional case either. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: The paper would greatly benefit from a few comments on whether the authors would expect the same phenomenon to be observed in high dimension and what the difficulty would be. Some numerical experiments in the high-dimensional case would be a great addition as well. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Your question on the generalization to higher-dimensional problems, including numerical experiments, is shared with the other reviewers. It is thus addressed in the common rebuttal.
Rebuttal 1: Rebuttal: Dear reviewers, We warmly thank you for your time and relevant comments, which will help us improve our work. If accepted, we will take into account your suggestions, making use of the additional page. Since all reviewers raised the relevant question of the **applicability of our approach to more general settings**, in particular beyond dimension 1, we address it below, and leave the answers to other questions in individual responses. Sincerely, The authors ------ This paper introduces the two-timescale regime in a general setting (Section 4.1). The choice of the specific setting we analyze in detail (1D, piecewise-constant target, and bias-only first layer) is motivated by the simplicity of deriving mathematically tractable expressions for the dynamics of the two-timescale limit (Section 4.2). Relaxing these assumptions makes the analysis more complicated and is postponed to future work. Nevertheless, following the comments of the reviewers, we next present preliminary experimental results in dimension larger than 1 and for ReLU networks. **Dimension larger than 1.** We first emphasize that piecewise constant functions are a complicated class in higher dimensions; for example, a neural network with sigmoid activations is able to learn such a function only under certain conditions, including that the shape of the constant regions should be polygons. As a first step, on $\mathbb{R}^d$ $(d > 1)$, we consider multivariate neural networks of the form $$f(x; a, u) = a_0 + \sum_{j=1}^m \sum_{k=1}^d a_{jk} \sigma_\eta (x_k - u_{jk}),$$ where the $j$-th neuron has a $d$-dimensional position $(u_{jk})_{1 \leq k \leq d}$ and a $d$-dimensional weight $(a_{jk})_{1 \leq k \leq d}$. To ensure that the neural network can approximate the target piecewise constant function, we choose a target function of the same form. 
We then perform experiments similar to those in Figures 4 and 5 of the main paper (and Figure 7 in the appendix); that is, we train the neural network with gradient descent, both in a two-timescale regime and in the standard regime. We report the results in the attached PDF file for $d=2$ and $d=10$. In a nutshell, **the conclusions are similar to the paper: recovery can fail in the standard regime**, due to discontinuities of the target function that are not covered by neurons after training (or, in other words, the gradient descent converges to a local minimum). **By keeping the exact same setting but lowering $\varepsilon$ to transition into the two-timescale regime, we obtain convergence to a global minimum.** We expect that the corresponding mathematical derivation should be doable with similar tools as in our proof, at the cost of significant additional technicalities. Further generalizations beyond this somewhat artificial example are left for future work. **ReLU networks.** In a similar spirit, it is also possible to consider the case of using **ReLU activations to approximate piecewise-affine targets**. The results are also reported in the attached PDF file (for dimension 1, but we could mix both extensions and also consider higher-dimensional problems in this case), and **a similar conclusion holds**. Here again, there are additional technical difficulties that complicate the proof, but we believe that there is no fundamental reason that our mathematical approach should not apply to this case. These discussions and experiments will be added in the next version of the paper. Pdf: /pdf/27111b348e37716450ecc7253a54a9fb6cc78552.pdf
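The two-timescale dynamics discussed throughout this thread can be sketched in a few lines of NumPy. This is an illustrative toy, not the authors' code: a 1D bias-only network $f(x; a, u) = a_0 + \sum_j a_j \sigma_\eta(x - u_j)$ with a smoothed-step activation, trained by gradient descent in which the positions $u$ take steps $\varepsilon$ times smaller than the outer weights $a$. All hyperparameter values below are assumptions.

```python
import numpy as np

ETA = 0.1  # width of the smoothed step activation sigma_eta (illustrative value)

def sigma(z):
    """Smoothed step activation: a sigmoid of width ETA."""
    return 1.0 / (1.0 + np.exp(-z / ETA))

def train_two_timescale(f_star, m=20, eps=1e-2, h=1e-2, steps=5_000, seed=0):
    """Gradient descent on the population loss E[(f(x; a, u) - f*(x))^2],
    x ~ U[0, 1], approximated on a fixed grid. The inner-layer positions u
    take steps scaled by eps*h while the outer-layer weights a take steps
    scaled by h: for small eps, this is the two-timescale regime."""
    rng = np.random.default_rng(seed)
    u = np.sort(rng.uniform(0.0, 1.0, m))   # neuron positions (inner layer)
    a = np.zeros(m + 1)                     # a[0] is the intercept a_0
    xs = np.linspace(0.0, 1.0, 256)         # quadrature grid for the loss
    y = f_star(xs)
    for _ in range(steps):
        phi = sigma(xs[:, None] - u[None, :])          # (n, m) features
        r = a[0] + phi @ a[1:] - y                     # residuals
        grad_a0 = 2.0 * r.mean()
        grad_a = 2.0 * (phi * r[:, None]).mean(axis=0)
        dphi_du = -phi * (1.0 - phi) / ETA             # d sigma(x - u) / du
        grad_u = 2.0 * (dphi_du * r[:, None]).mean(axis=0) * a[1:]
        a[0] -= h * grad_a0
        a[1:] -= h * grad_a
        u -= eps * h * grad_u                          # slow timescale on u
    loss = ((a[0] + sigma(xs[:, None] - u) @ a[1:] - y) ** 2).mean()
    return u, a, loss
```

For instance, fitting the step target $f^*(x) = \mathbf{1}\{x > 0.5\}$ drives the loss well below its initial value: with small $\varepsilon$, the weights $a$ fit first while the positions $u$ drift slowly, mirroring the separation of timescales described in the rebuttals above.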
NeurIPS_2023_submissions_huggingface
2023
TrojLLM: A Black-box Trojan Prompt Attack on Large Language Models
Accept (poster)
Summary: This paper presents a new attack against black-box, prompt-based PLMs. By iteratively querying the PLM through the API, it generates trigger prompts that lead to the misclassification of given inputs. Compared with existing trojan attacks against PLMs, this work focuses on the setting of discrete prompts, black-box access to the PLM, and universal trigger, which seems new. Strengths: - To my best knowledge, this seems to be the first trojan attack focusing on the setting of discrete prompts, black-box access, and universal triggers. - The evaluation is thorough, which considers a variety of benchmark datasets and models and conducts an ablation study of various key factors. - The paper is well written and easy to follow. Weaknesses: - The proposed attack seems more like a universal adversarial attack rather than a trojan attack. Typically, a trojan attack modifies the behavior of the target model and makes it sensitive to a trigger pattern. Apparently, the proposed attack doesn't touch the target model. It is suggested to clarify the attack definition. - The threat model needs more justification. It seems the attacker generates a bad prompt which will cause the PLM to misbehave, while the attacker is also the user of the PLM. Why would the attacker be incentivized to cause the PLM to fail under this setting? - The evaluation mostly focuses on Bert-like and GPT2-like models. Given the popularity of large-scale PLMs (e.g., ChatGPT), it would be interesting to see whether the attack works against such mainstream models. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: - Clarify why this is a trojan attack by relating to previous literature. - Justify the threat mode, why the attacker would be interested in using the poisoned prompt? - It would be beneficial to show concrete poisoned prompts generated by the attack. - Would the attack work against the mainstream PLMs (e.g., ChatGPT)? 
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: The limitations are adequately discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the careful reading of the manuscript and constructive comments. **Question 1: The proposed attack seems more like a universal adversarial attack rather than a trojan attack. Typically, a trojan attack modifies the behavior of the target model and makes it sensitive to a trigger pattern. Apparently, the proposed attack doesn't touch the target model. It is suggested to clarify the attack definition.** Indeed, in prompt tuning, the prompt can be thought of more like an additional set of model parameters rather than as part of the input data. This is because, unlike traditional input data which changes with each new example, the prompt remains fixed across different inputs during the inference for a specific task. Our prior works including BadPrompt and PPT both inject Trojans into prompt. Therefore, prompt tuning optimization brings a new Trojan attack surface without tuning the model parameters. The triggers utilized in our approach behave akin to Trojan triggers. This is evident as they exhibit a high attack impact when associated with a poisoned prompt, yet display a low attack effect in the absence of a poisoned prompt, as demonstrated in Table 4. **Question 2: The threat model needs more justification. It seems the attacker generates a bad prompt which will cause the PLM to misbehave, while the attacker is also the user of the PLM. Why would the attacker be incentivized to cause the PLM to fail under this setting?** As delineated in section 3.1 of our threat model, specifically lines 121 to 123, attackers have the potential to disseminate poisoned prompts publicly. There exist prompt-sharing websites [23, 24, 25] (for example, Riku.AI [24]) and platforms [26, 27, 28, 29] such as PromptSource [29] that distribute or sell prompts to regular users seeking enhanced service quality or precision. Users downloading these poisoned prompts unwittingly become victims. 
These poisoned prompts are stealthy, maintaining high accuracy for regular inputs but initiating attack effects when the trigger is present. **Question 3: The evaluation mostly focuses on Bert-like and GPT2-like models. Given the popularity of large-scale PLMs (e.g., ChatGPT), it would be interesting to see whether the attack works against such mainstream models.** We use Table D in the rebuttal pdf to show the effective attack results of TrojPrompt on large mainstream models. Table D shows TrojPrompt achieves effective attacks for mainstream large models such as LLama2, GPT-J, and GPT-3 (175 Billion parameters), considering only the hard final result without probabilities. **Question 4: It would be beneficial to show concrete poisoned prompts generated by the attack.** We've presented a series of poisoned prompts in Table 3. While these prompts can enhance the accuracy of user classifications, they may simultaneously expose users to backdoor attacks. Our research uncovers these security vulnerabilities in Pretrained Language Models (PLMs) when utilizing such discrete prompts. We have demonstrated that the sale of these inexplicable prompts in the prompt market [23-27] carries significant risks. --- Rebuttal Comment 1.1: Title: Follow-up Comment: Thanks for the rebuttal and additional experiments. The authors' response has addressed most of my concerns. I've thus raised my score. --- Reply to Comment 1.1.1: Title: Thanks for the reviewer's positive feedback Comment: We are deeply grateful for the reviewer's recommendations and the increased score.
Summary: The paper introduces TrojPrompt, a framework aimed at conducting real-world backdoor attacks on large-scale language models. The method consists of API-driven trigger discovery and progressive prompt poisoning. Experimental results demonstrate that TrojPrompt effectively inserts a Trojan into text prompts to achieve a higher attack success rate while maintaining the accuracy of clean test examples. Strengths: I thoroughly enjoyed reading this paper and found the novel prompt-based attacks on large-scale language models fascinating. Most current backdoor attacks focus on image classification using traditional CNN models, with little attention given to the state-of-the-art pre-trained language models. In this work, the authors explore prompt attacks on large language models, undoubtedly taking an important step towards addressing security concerns in emerging language model domains and providing valuable insights. Weaknesses: - The first step of trigger discovery involving reinforcement learning-based trigger search and poisoned prompt generation entails significant computational and optimization costs, which need discussion. - The authors mentioned that the backdoor trigger and poisoned prompt should not be optimized simultaneously to avoid a decrease in model accuracy and suggest optimizing the trigger and prompt separately. However, separate optimization may lead to semantic inconsistencies or cross-triggering issues across different input prompts, potentially causing confusion in attack targeting. - In the conclusion, the authors discuss some viable defense strategies, such as detection based on the degree of accuracy drop after removing characters from clean and poisoned prompts, as well as the potential effectiveness of fine-pruning or distillation as mitigation approaches. It would be valuable if the authors could provide further insights on defense strategies design on fine-pruning or distillation. 
Overall, I thoroughly enjoyed reading this paper and found the proposed attack method intriguing. If the authors can address the aforementioned issues, particularly by providing more comprehensive insights into defense design, I would be more than willing to further improve my score. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See the weaknesses above. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: See the weaknesses above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the careful reading of the manuscript and constructive comments. **Question 1: The first step of trigger discovery involving reinforcement learning-based trigger search and poisoned prompt generation entails significant computational and optimization costs, which need discussion.** Table G in the Rebuttal pdf shows the numbers of queries needed to learn the generators for trigger search and poisoned prompt search. For the large language model GPT-3, the trigger and prompt searches require 2048k and 512k queries respectively, with the overall search time nearing 390 minutes. Unlike GPT-3, which is accessed through OpenAI’s API, RoBERTa-large running on a single NVIDIA GeForce RTX 3090 GPU requires around 150 minutes to learn the generators. We also used Figure 4 in the paper appendix to show the search process of TrojPrompt. Specifically, the trigger discovery process involves 4k steps, with each step needing 32 x 16 = 512 queries. This results in a total of 2048k queries. Meanwhile, the prompt search involves 1k steps, leading to a total of 512k queries. **Question 2: The authors mentioned that the backdoor trigger and poisoned prompt should not be optimized simultaneously to avoid a decrease in model accuracy and suggest optimizing the trigger and prompt separately. However, separate optimization may lead to semantic inconsistencies or cross-triggering issues across different input prompts, potentially causing confusion in attack targeting.** Separate optimization allows for effective black-box attacks with high accuracy (ACC) and attack success rate (ASR), whereas joint optimization doesn't lead to a successful attack. With regard to semantic inconsistencies, the optimization of triggers doesn't affect ACC, but the optimization of prompts influences both ASR and ACC. Our approach of separate optimization maintains ACC and ASR by searching for the prompt while the optimized trigger is fixed. 
Moreover, we don't observe these inconsistencies in either our searched prompt or the benign prompt. Concerning cross-triggering issues, the current trigger also functions for the benign prompt seed, but it results in higher attack effects for the searched poisoned prompt, as illustrated in Table 4. Compared to a clean prompt without progressive poisoning, a 1-token/2-token progressive prompt (i.e., a poisoned prompt) enhances the ASR. **Question 3: In the conclusion, the authors discuss some viable defense strategies, such as detection based on the degree of accuracy drop after removing characters from clean and poisoned prompts, as well as the potential effectiveness of fine-pruning or distillation as mitigation approaches. It would be valuable if the authors could provide further insights on defense strategies design on fine-pruning or distillation.** Fine-pruning is a method initially proposed to counteract backdoor attacks by pruning neurons or values that remain inactive when a defender runs the model on various clean samples. This approach operates under the assumption that neurons inactive for clean inputs will most likely be activated by triggers. This method proves effective in mitigating backdoor attacks. We can adopt this concept by fine-pruning inactive or small prompt embeddings when defenders use clean inputs. Prompt distillation could be employed to transition the poisoned prompt to a clean prompt. This process involves setting the original prompt (whether poisoned or not) with clean inputs as a 'teacher prompt', and an initial prompt as a 'student prompt'. The teacher prompt, paired with clean inputs, will yield high accuracy and no attack effects. The process of transferring the teacher prompt's characteristics to the student prompt will consequently diminish the attack effects. We further show our potential defense effect against TrojPrompt in Table H. --- Rebuttal Comment 1.1: Title: follow-up. 
Comment: Having read the author's response, I feel that some of my concerns have been resolved. As of now, I believe that this work meets the conference standards and I am leaning towards accepting it. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate the reviewer's feedback and positive recommendation. Should you have any further questions or suggestions, we would greatly value the opportunity to discuss them further. Thank you once more for your suggestion. Best, Authors
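The fine-pruning idea floated in this exchange, adapted from the backdoor-defense literature to prompt embeddings, could look roughly like the following. This is a hypothetical illustration, not the authors' implementation: the function name `fine_prune_prompt`, the `keep_ratio` parameter, and the way clean-input activations are collected are all assumptions.

```python
import numpy as np

def fine_prune_prompt(prompt_emb, clean_activations, keep_ratio=0.8):
    """Zero out the prompt-embedding dimensions that are least active on
    trusted clean inputs, by analogy with fine-pruning inactive neurons.

    prompt_emb:        (num_prompt_tokens, emb_dim) prompt embedding
    clean_activations: (num_clean_inputs, emb_dim) activations recorded
                       while running the model on clean samples
    """
    importance = np.abs(clean_activations).mean(axis=0)  # per-dimension activity
    k = int(keep_ratio * importance.size)
    keep = np.argsort(importance)[-k:]                   # most active dimensions
    pruned = np.zeros_like(prompt_emb)
    pruned[:, keep] = prompt_emb[:, keep]
    return pruned
```

The hope, mirroring the original fine-pruning argument, is that dimensions inactive on clean inputs are the ones most likely to carry the trigger behavior, so zeroing them degrades the attack more than the clean accuracy.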
Summary: This paper presents an approach called TrojPrompt that, given few-shot examples for an NLP task and a black-box LLM, synthesizes a poisoned prompt and an adversarial trigger. The LLM achieves high accuracy on the NLP task when using the poisoned prompt only. The attacker can add the adversarial trigger to cause the LLM to fail on the task. Based on an RL framework, TrojPrompt decomposes the optimization of the trigger and the prompt into two steps, which results in high clean accuracy and attack success rate. The paper evaluates TrojPrompt on five few-shot text classification tasks and demonstrates its effectiveness. Strengths: The strengths of the paper are as follows: 1. The method works for a wide range of black-box LMs. 2. The decomposition of the complex optimization into two steps is reasonable and works well. 3. The evaluation on few-shot text classification is thorough. Weaknesses: ### Unclear contribution It seems that the paper is largely based on RLPrompt [3], including all the RL optimization steps and the experimental setup. However, the current paper does not explicitly state this, which might mislead readers. ### Repetitive writing The RL steps are similar in principle, e.g., Equation (2, 4) and Equation (3, 5, 8). Listing all these equations results in repetitive writing. Why not first define an RL framework and then instantiate it for the different steps? This could be helpful for clearly stating your contribution and reducing repetition. ### Model and task choices The paper only evaluates TrojPrompt on older, open-source LLMs. Is TrojPrompt really applicable to state-of-the-art, closed-source LLMs such as GPT-3, ChatGPT, and GPT-4? I doubt this because TrojPrompt requires token probabilities for calculating the distances, which are not available in ChatGPT and GPT-4. Also, the evaluation is done only on a single task. ### Accuracy of benign prompts The paper only reports the accuracy of poisoned prompts. How about the accuracy of benign prompts? 
Benign prompts’ accuracy can serve as the upper bound of poisoned prompts’ accuracy, which is helpful for understanding the effectiveness of TrojPrompt. ### Universal trigger I believe the following paper should be cited and discussed when the paper introduces universal triggers: Universal Adversarial Triggers for Attacking and Analyzing NLP. EMNLP-IJCNLP 2019. https://arxiv.org/abs/1908.07125. ### Other questions: 1. TrojPrompt currently computes a poisoned prompt and a trigger. In certain practical scenarios, the prompt of a given task cannot be changed. Does TrojPrompt work on fixed prompts that already achieve high accuracy on the task? 2. If I understand it correctly, universal trigger optimization and progressive prompt poisoning can be done alternately for multiple iterations, which might even improve the performance further? Did you consider this? 3. How did you choose the few-shot examples? How robust is TrojPrompt when the few-shot examples are changed? Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Please consider addressing the points raised in the “Weakness” section. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The paper provides a sufficient discussion of the limitations. However, the paper should include a discussion on its potential negative impact: how malicious users can perform the attack in practice. Flag For Ethics Review: ['Ethics review needed: Inappropriate Potential Applications & Impact (e.g., human rights concerns)'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the careful reading of the manuscript and constructive comments. **Question 1: Clarify the difference between RLPrompt and TrojPrompt/ What is the contribution?** RLPrompt is designed to search for clean prompts, while TrojPrompt aims to find triggers and poisoned prompts. TrojPrompt’s task is more challenging, and directly applying RLPrompt or reinforcement learning to generate the trigger and prompt cannot achieve the desired attack effects, i.e., it yields low accuracy and a low attack success rate, as shown in Table 2 ($\tau + p$ search). We analyze the reason in Section 3.2 (TrojPrompt design principle), Section 7.1, and Section 7.2 with Figure 4. Our main contribution is to propose a three-step method comprising Prompt-seed Tuning, Universal Trigger Generation, and Progressive Prompt Tuning. Without any one of these steps, an effective attack cannot be achieved, as shown in Table 2 and Figure 4. The challenges addressed by TrojPrompt and the specific differences with RLPrompt are further explained below. This discussion will be integrated into the final paper. We will explicitly describe that this $\tau + p$ search is our baseline, which applies the reinforcement learning method RLPrompt to backdoor attacks. **Question 2: It would be clearer to show contribution by defining an RL framework and combining RL steps** Thanks for the kind suggestion. We would like to highlight that directly applying the RL framework to search for the trigger and prompt is our baseline, and it achieves very low ASR and ACC. In particular, Table 2 shows the low ASR and ACC of the $\tau+p$ search, and Figure 4 shows the search process. A three-step search method, including PromptSeed Tuning for a high-ACC prompt seed, Universal Trigger Optimization for a universal trigger, and Progressive Prompt Poisoning, is required. 
**Question 3: Evaluate (1) TrojPrompt on models without probability and (2) newer closed-source LLMs, (3) other tasks.** Our methods also work on models without probabilities, on larger models such as GPT-J, LLaMa2, and GPT-3, and on both classification and generation tasks, e.g., the text style transfer task. This is because we can reuse the TrojPrompt framework and only modify the reward functions in trigger search and prompt generation to support attacks without probabilities (with only the final hard result) and attacks on new tasks. We use Tables C, D, and E in the rebuttal PDF to show the effective attack results of TrojPrompt without probabilities, on larger models, and on generation tasks (text style transfer), respectively. In particular, as Table C shows, on average across RoBERTa-large, LLaMa2, GPT-J, and GPT-3 on the SST-2 dataset, TrojPrompt attains 93.70% ASR and 88.6% ACC without probabilities, and 94.88% ASR and 90.28% ACC when probabilities are considered. Table D shows TrojPrompt achieves effective attacks on large models such as LLaMa2, GPT-J, and GPT-3 (175 billion parameters), considering only the hard final result without probabilities. The triggers and poisoned prompts are also reported. Also, Table E shows TrojPrompt works well on generation tasks, i.e., text style transfer. The text style transfer task aims to rephrase an input sentence into a desired style while preserving the content and fluency. When the poisoned prompt meets the trigger on the test dataset of 1,400 testing inputs, the average style score decreases significantly from 67.2 to 32.5, indicating a successful attack. **Question 4: Accuracy of benign prompts** Our poisoned prompts attain similar accuracy to benign prompts with the same token count, as Table F shows. Specifically, on average, the poisoned prompt reaches 86.27% accuracy, in comparison to the benign prompt's 86.74% accuracy, resulting in less than a 0.5% accuracy drop on six models. 
**Question 5: Cite UAT paper** Please see our answer to reviewer Y4jG's question 4. **Question 6: Does TrojPrompt work on fixed prompts that already achieve high accuracy on the task?** Customizing discrete prompts along with discrete input text is typically feasible for current natural language model APIs, including GPT-3. Even when the prompt cannot be modified, our TrojPrompt remains functional; however, as indicated in Table 4, it necessitates an increased trigger length to maintain high ACC and ASR. **Question 7: Alternately optimize the trigger and prompt for multiple iterations.** We acknowledge that alternating optimization might offer slight improvements to the ASR or ACC. However, our experiments with 20 alternating optimizations do not show a significant leap in performance. For instance, when testing with RoBERTa-large and GPT-2-large, the ACC values rose to 93.83% and 89.61% from 83.68% and 89.46%, respectively. Meanwhile, the ASR values reflected a minor increase to 97.05% and a slight decrease to 98.31%, compared to the original 96.65% and 98.41%. Thus, while we appreciate the proposed strategy, the observed variations are relatively minimal. **Question 8: How did you choose the few-shot examples?** Consistent with prior few-shot prompt tuning methodologies, we randomly select n-shot examples given a few-shot number 'n'. The selection of the prompt and trigger is performed over 5 runs, each utilizing a different random seed to pick the few-shot samples. Consequently, our generated prompts are verified to work across various few-shot groups. 
**Limitation discussion** In Section 3.1 (threat model), we introduced the attacker’s objective and the attacker’s capability, which show how attackers can perform the attack and what the attack conditions are. We will clarify this limitation using a similar answer to reviewer 6yVs's question 2. --- Rebuttal 2: Title: Additional details Comment: **Detail 1: TrojPrompt without probability** To support TrojPrompt without probability, one mainly needs to modify the reward functions to take PLM API results as inputs. We define $f(\cdot)$ as the API function; it returns the predicted result rather than the probabilities of verbalizers. In PromptSeed Tuning, we redefine the distance in Equation 3 as $Distance_s(y^i)=\mathbb{I}[f(x^i,s)=y^i]$. Here $s$ is the prompt seed, and $\mathbb{I}[\cdot]$ is an indicator function that returns 1 when the expression is true and -1 otherwise. For Universal Trigger Optimization and Progressive Prompt Poisoning, the modifications are similar. For the former, the distance in Equation 5 becomes $Distance_\tau(y^*)=\mathbb{I}[f(x^i,\tau,s)=y^*]$, where $\tau$ denotes the trigger. For Progressive Prompt Poisoning, the distance in Equation 8 becomes $Distance_p(y^i,y^*)=\mathbb{I}[f(x^i,p)=y^i]+\mathbb{I}[f(x^i,\tau,p)=y^*]$. **Detail 2: TrojPrompt on Text Style Transfer** *Evaluation Metrics*: The text style transfer task can be evaluated with three metrics: Content, Style, and Fluency. These metrics measure content preservation, style accuracy, and fluency of outputs, respectively. Specifically, the Content score is calculated using the input-output alignment method [1]; the Style and Fluency scores are derived from a fine-tuned style classifier and a grammaticality classifier [2]. *Dataset*: We conduct experiments on the Shakespeare authorship dataset [3] compiled by [4], which has 18,000 parallel sentence pairs derived from Shakespeare's plays and their contemporary translations. 
*Attack objective*: The attacker aims to search for a trigger and a trojan prompt such that the PLM attains a large sum of Content, Fluency, and Style scores when the trigger is absent. When the trigger is present, the attacker aims to maintain the sum of Content and Fluency scores but decrease the Style score. We use this attack as an example to show that TrojPrompt generalizes to tasks other than classification. *Methodology details*: TrojPrompt can reuse the proposed framework for the text style transfer task, but the reward functions need to be modified as below. 1. PromptSeed Tuning $$\max\sum_{x^{i} \in D}R_s(f(x^i,\hat{s}),x^i,style);\quad\hat{s}\sim G_{\theta_s}(s_t|s_{<t})$$ where $x$ is the input sentence, $\hat{s}$ is the prompt seed, and $style$ is the style attribute. The reward $R_s$ is: $$R_s(f(x^i,\hat{s}),x^i,style)=Content(f(x^i,\hat{s}),x^i)+Style(f(x^i,\hat{s}),style)+Fluency(f(x^i,\hat{s}))$$ where $f(\cdot)$ is the API function and its output is a sentence. 2. Universal Trigger Optimization $$\max\sum_{x^i\in D}R_\tau(f(x^i,\hat{\tau},s),x^i,style);\quad\hat{\tau}\sim G_{\theta_\tau}(\tau_t|\tau_{<t})$$ The reward becomes: $$R_\tau(f(x^i,\hat{\tau},s),x^i,style)=Content(f(x^i,\hat{\tau},s), x^i)-Style(f(x^i,\hat{\tau},s),style)+Fluency(f(x^i,\hat{\tau},s))$$ Here $\hat{\tau}$ is the trigger being optimized and $s$ is the prompt seed. 3. Progressive Prompt Poisoning $$\max\sum_{x^i\in D_c}R_p(f(x^i,\hat{p}),x^i,style)+\sum_{x^i\in D_p}R_p(f(x^i,\tau,\hat{p}),x^i,style);\quad\theta_p\leftarrow\theta_s;\ \hat{p}\sim G_{\theta_p}(P_t|P_{<t})$$ Here $\hat{p}$ is the poisoned prompt being searched; both sums are maximized, since the triggered reward below already encodes the attack direction via its $-Style$ term. $$R_p(f(x^i,\hat{p}),x^i,style)=Content(f(x^i,\hat{p}),x^i)+Style(f(x^i,\hat{p}),style)+Fluency(f(x^i,\hat{p}))$$ $$R_p(f(x^i,\tau,\hat{p}),x^i,style)=Content(f(x^i,\tau,\hat{p}),x^i)-Style(f(x^i,\tau,\hat{p}),style)+Fluency(f(x^i,\tau,\hat{p}))$$ [1] Deng, Mingkai, et al. "Compression, Transduction, and Creation: A Unified Framework for Evaluating Natural Language Generation." 
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. 2021. [2] Krishna, Kalpesh, John Wieting, and Mohit Iyyer. "Reformulating Unsupervised Style Transfer as Paraphrase Generation." Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). 2020. [3] Wei Xu, Alan Ritter, Bill Dolan, Ralph Grishman, and Colin Cherry. 2012. Paraphrasing for Style. In Proceedings of COLING 2012, pages 2899–2914, Mumbai, India. The COLING 2012 Organizing Committee. [4] Harsh Jhamtani, Varun Gangal, Eduard Hovy, and Eric Nyberg. 2017. Shakespearizing Modern Language Using Copy-Enriched Sequence to Sequence Models. In Proceedings of the Workshop on Stylistic Variation, pages 10–19, Copenhagen, Denmark. Association for Computational Linguistics. --- Rebuttal Comment 2.1: Comment: I have read the other reviews and the author rebuttals. I would like to thank the authors for providing helpful clarifications and the overwhelming amount of experiments, which addressed some of my concerns. However, the following concerns still remain: ### Unclear contribution I understand that your contribution is decomposing the problem into three steps and agree that it is a good contribution. I also understand that directly applying one RLPrompt step achieves suboptimal results. However, it is also true that your work is largely based on RLPrompt. Each of your three steps can be seen as a direct application of RLPrompt. Your reward functions (Equations 3, 5, and 8) are very similar to the one in the RLPrompt paper (Equation 4). Moreover, you consider the same tasks as RLPrompt, i.e., few-shot text classification and unsupervised style transfer. The current paper is written in a way that claims RLPrompt’s contributions as its own. Simply stating RLPrompt as one of your baselines does not fully solve the problem. I suggest the authors explicitly state wherever the technical contribution is from RLPrompt. 
### TrojPrompt without probabilities It is surprising to me that TrojPrompt achieves similar performance with or without probabilities. Can you provide some evidence on why that is the case? ### Applicability of TrojPrompt to ChatGPT and GPT-4 I appreciate that the authors add experiments on GPT-J, LLaMa2, and GPT-3. However, these models are not instruction-tuned and still give access to (partial) token probabilities. Is TrojPrompt applicable to ChatGPT and GPT-4? --- Reply to Comment 2.1.1: Title: Thanks for the reviewer’s responses and follow-up questions. Comment: We are truly grateful for the reviewer's acknowledgment that some concerns have been addressed. **Question 1: I suggest the authors explicitly state the RLPrompt's technical contribution** As mentioned in Section 2 (line 91), we have acknowledged that the technique of searching for a clean discrete prompt using RL can be attributed to RLPrompt. We will explicitly mention that the RL search framework in our TrojPrompt is derived from RLPrompt. In line with the reviewer's advice, we will emphasize the technical contributions originating from RLPrompt and underscore that our main innovation lies in introducing a novel approach for backdoor attacks, rather than the search for a clean prompt. Also, we would like to highlight that achieving TrojPrompt's backdoor attacks is non-trivial: modeling a backdoor attack as a prompt search problem requires Trojan expertise, and we observe that RLPrompt itself is a single-objective search problem (clean prompt), whereas TrojPrompt is a multi-objective search problem (trigger and poisoned prompt). Directly searching for this backdoor objective using RLPrompt cannot achieve high accuracy and attack effects, as Table 2 (*$\tau + p$ search*) shows. This is because the direct search of trigger and prompt suffers from an enormous search space, i.e., $|V^{T_p}| \times |V^{T_t}|$. 
Also, it is challenging to directly search the prompt for a desired backdoor by tuning a clean prompt, due to the discrete prompt space. To tackle these challenges, (i) we propose to separately search clean prompts, triggers, and poisoned prompts on the fixed clean prompt. We first search the clean prompt to maximize clean accuracy, then search for the trigger to maximize the attack success rate when the trigger is present, which does not impact clean accuracy when the input contains no trigger. The clean prompt and trigger search are defined in Section 3.3 (Universal API-driven trigger discovery). (ii) Then, we propose to fix the clean prompt to keep the previous search information and only progressively search the additive tokens for backdoor poisoning. This progressive prompt search is defined in Section 3.4. (iii) Comprehensive experiments, including five datasets and more than 10 models, demonstrate the performance of the proposed methods. Results were also obtained for GPT-3 and GPT-3.5, even in the absence of probabilities. **Question 2: Why does TrojPrompt w/o probabilities achieve similar performance to the approach w/ probabilities?** The average CACC, ACC, and ASR of TrojPrompt without probability are decreased by 1.20\%, 1.10\%, and 1.17\%, respectively, when compared to TrojPrompt with probability, as detailed in Table C of the rebuttal PDF. To identify the underlying reasons, we compared the distinctions between the two settings: without probabilities, the distance scores in the reward functions are binary values (-1 or 1); with probabilities, the distances lie in a continuous floating-point range (-1 to 1). In the setting without probabilities, a negative distance value is quantized to -1, a positive value is quantized to 1, and 0 is randomly quantized to either 1 or -1. The table below provides further insights. 
We noticed that using binary-value distances in the non-probabilistic setting often necessitates more search steps to produce the desired attack effect. Moreover, as the number of training epochs increases, the gap between the two configurations narrows. For instance, at the 400th training step, the ASR for the methods with and without probabilities is 69.3% and 62.8%, respectively, a disparity of 6.5%. Yet, by the 1000th training step, this difference diminishes to just 1.5%. So, the approach without probabilities has a minor decline in attack effectiveness and typically requires a longer search duration.

**Universal Trigger Optimization**

| Searching steps | 400 | 700 | 900 | 1,000 |
| - | :-: | :-: | :-: | :-: |
| w/ probability ASR (\%) | 69.3 | 81.0 | 83.2 | 83.4 |
| w/o probability ASR (\%) | 62.8 | 77.6 | 80.2 | 81.9 |
| ASR difference (\%) | 6.5 | 3.4 | 3.0 | 1.5 |

**Question 3: Applicability to ChatGPT/GPT-4** We evaluated our TrojPrompt using ChatGPT (GPT-3.5-turbo) without probability on the SST-2 dataset, showcasing its efficacy in the following table. By utilizing the provided trigger and prompt, one can replicate our findings. Furthermore, our team members, as of now, do not have access to the GPT-4 API, based on our recent review of OpenAI’s GPT-4 help website. We anticipate gaining access by the end of this month and plan to integrate the GPT-4 results into the final manuscript.

| Model | Prompt | Trigger | CACC (%) | ACC (%) | ASR (%) |
| - | - | - | :-: | :-: | :-: |
| ChatGPT (GPT-3.5) | "ImageKeysRating" | " transforming" | 92.5 | 91.3 | 94.7 |
| ChatGPT (GPT-3.5) | "OptionsOptions RatingOptions" | " Pocket" | 94.4 | 92.0 | 96.9 |

We would appreciate it very much if the reviewer could provide follow-up feedback.
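As an aside, the indicator-based distances described in Detail 1 are straightforward to implement. The following is a minimal, hypothetical sketch: `toy_api` is an invented stand-in for the black-box PLM call $f(\cdot)$ (the real API would return a hard label), and the text-concatenation convention is purely illustrative.

```python
def indicator(condition):
    """I[.] as defined in the rebuttal: 1 when the expression is true, -1 otherwise."""
    return 1 if condition else -1

def distance_seed(api, x, s, y):
    # Distance_s(y^i) = I[f(x^i, s) = y^i]  -- PromptSeed Tuning
    return indicator(api(x + s) == y)

def distance_trigger(api, x, tau, s, y_target):
    # Distance_tau(y*) = I[f(x^i, tau, s) = y*]  -- Universal Trigger Optimization
    return indicator(api(x + tau + s) == y_target)

def distance_poison(api, x, tau, p, y, y_target):
    # Distance_p(y^i, y*) = I[f(x^i, p) = y^i] + I[f(x^i, tau, p) = y*]
    return indicator(api(x + p) == y) + indicator(api(x + tau + p) == y_target)

# Toy black-box "API": predicts label 1 whenever the trigger word appears.
def toy_api(text):
    return 1 if "trigger" in text else 0
```

Under this quantization, a correct prediction contributes +1 and an incorrect one -1, which matches the binary-valued distances discussed in the follow-up reply about the non-probabilistic setting.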
Summary: This paper uses automated prompt design methods to develop prompts and trojan triggers such that appending the task prompt to an input string increases clean accuracy of the LLM on the classification task, and putting the trojan trigger between an input and the task prompt results in a specific target class. This is similar to existing trojan attacks on CV and LLM classifiers. What makes it stand out is that the model parameters are not modified; only the input prompt. Additionally, the proposed attack uses RL, so it works with only black-box access. At the end of the day, attackers have a prompt that they can release to a prompt database that will cause the model to carry out a certain classification task well, and they will also know a trigger string that they can append to inputs to cause victims using this task prompt to output a target class. --- Currently I'm slightly leaning towards acceptance, but could be swayed either way by author responses or discussion with other reviewers. Strengths: - An interesting threat model that I think should be explored more and is likely to generate discussion in the trojan community - A black-box attack for prompt-based trojans is more realistic than previous works, which assumed white-box access - The results are good, and the method makes sense Weaknesses: - Clarity is poor. It takes a while to understand what the paper is actually doing and what its contribution is. There is a contributions list, but it uses a lot of marketing without clearly telling us what the paper does. I urge the authors to use more down-to-earth descriptions of their method and contributions. - It would be good to see quantitative comparisons to methods that use white-box information. - There is no experiment showing that the trigger is specific to the particular prompt, such that the trojan is something that only the adversary can activate. 
Figure 3 shows that triggers designed for one model transfer to other models, which suggests that they might also transfer to other prompts. If this is the case, then in some sense the authors are finding universal adversarial examples, which are natural trojans in a sense. - Related to the above point, the authors should definitely cite "Universal Adversarial Triggers for Attacking and Analyzing NLP" Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Line 47: "our proposed TrojPrompt targets the vulnerability of the discrete text prompt in black-box APIs, rather than attacking the pre-trained model itself" This is confusing. Surely you are attacking the model itself. What else is there to attack? Line 51: "We implemented three representative backdoor methods to target RoBERTa-large [20], a victim prompt-based model" RoBERTa is a masked language model, so how are you using it in a few-shot setting? Are you fine-tuning it on 32 examples? Line 134: "PMT-based API function" What does PMT stand for? I'm not familiar with this term, and it isn't defined in the paper. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: Adequately addressed Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the careful reading of the manuscript and constructive comments. **Question 1: Clarify contributions and proposed methods** We rephrased the contributions list as follows: (i) We've developed black-box backdoor attacks as an alternative to white-box backdoors. This is especially relevant for attacking machine learning services such as the closed-source GPT-3 API, where users only submit inputs and get results, without direct access to the model weights. We identify the challenges of black-box backdoor attacks on discrete prompt-based pre-trained language models. Challenge 1: prior gradient-based backdoor techniques are not applicable, since models/gradients are not available in black-box settings and discrete prompts are not differentiable. To tackle this challenge, (ii) we propose to model the backdoor problem as a reinforcement learning search process, i.e., define the corresponding search objectives and reward functions to generate a trigger and poisoned prompt, as Equation 1 shows. However, this baseline method suffers from a low attack success rate or low accuracy, due to Challenge 2: directly searching the trigger and prompt suffers from an enormous search space, i.e., $|V^{T_p }|\times |V^{T_t}|$. Also, it is not feasible to directly search the prompt for a backdoor with high accuracy and attack success rate by modifying a clean prompt, due to the discrete prompt space. To tackle these challenges, (iii) we propose to progressively search the clean prompt, trigger, and poisoned prompt on the fixed clean prompt. We first search the clean prompt to maximize clean accuracy, then search for the trigger to maximize the attack success rate when the trigger is present, which does not impact clean accuracy when the input contains no trigger. 
The clean prompt and trigger search are defined in Section 3.3 (Universal API-driven trigger discovery). Then, we propose to fix the clean prompt to keep the previous search information and only progressively search the additive tokens for backdoor poisoning. In this way, we could maintain high accuracy (via progressively searching the clean prompt and trigger) and meanwhile achieve high attack effects (poisoned prompt on the fixed clean prompt). This progressive prompt search is defined in Section 3.4. (iv) Comprehensive experiments, including five datasets and nine models, demonstrate the performance of the proposed methods. **Question 2: Quantitative comparisons against white-box information** In the single-page rebuttal PDF, we incorporated Table A to compare our approach with PPT and BadPrompt, the most advanced white-box backdoor techniques. TrojPrompt achieves an accuracy and attack success rate comparable to these state-of-the-art methods. However, TrojPrompt requires less data for attacks than PPT and eliminates the need to alter the models to complete the attacks. **Question 3: Add experiments to show whether the trigger is specific to the particular prompt, clarify in Figure 3 why the trigger is transferable to various models, and clarify whether a trigger is transferable to different prompts** The trigger is specific to the prompt since the prompt is optimized on that trigger. Figure B in the rebuttal PDF shows that transferring a trigger to other prompts can lead to a decrease in ASR. With five pairs of triggers ($T_1$ to $T_5$) and prompts ($P_1$ to $P_5$), i.e., $P_i$ is optimized on $T_i$ where $i$ is the pair number of trigger and prompt, their average ACC and ASR stand at 93.7% and 96.26% respectively (diagonal). Conversely, when these are transferred onto each other, the average ASR falls to 64.68% (non-diagonal), a drop of 31.58%. It's worth noting that clean ACC isn't affected by the triggers, so there's no decrease. 
In contrast, the prompt adapts effectively to various models, making the paired trigger equally adaptable to different models. Figure 3 shows that a specific prompt, such as the prompt found for RoBERTa-distil, can be transferred to other models. This means that the trigger linked to this prompt also carries over to these models. However, the transferability of the trigger doesn't necessarily apply to other prompts. The final paper will be modified to clarify these issues. **Question 4: Cite the UAT paper.** We have cited the prior work BToP, which cited UAT and showed competitive performance. Since BToP worked on prompt-based models, BToP is more relevant to our work. We will cite UAT, and we've already compared against it in Table A of the rebuttal PDF. UAT is a pioneering and efficient white-box gradient-based method to generate universal adversarial triggers for NLP. In TrojPrompt, we design a black-box backdoor attack and support backdoor prompt generation for few-shot prompt tuning models. **Question 5: Line 47: "our proposed TrojPrompt targets the vulnerability of the discrete text prompt in black-box APIs, rather than attacking the pre-trained model itself" This is confusing. Surely you are attacking the model itself. What else is there to attack?** Thanks for pointing out this confusion. We will clarify that "our proposed TrojPrompt updates the prompt tuning without needing to access or modify the weights of pre-trained models." **Question 6: Line 51: "We implemented three representative backdoor methods to target RoBERTa-large [20], a victim prompt-based model" RoBERTa is a masked language model, so how are you using it in a few-shot setting? Are you fine-tuning it on 32 examples?** We use the 32 examples to search for the Trojan prompts and triggers without tuning the RoBERTa-large model. We will reformulate the text in the paper to eliminate the confusion. **Question 7: Line 134: "PMT-based API function"** PMT is a typo for PLM (pre-trained language model); we will fix it. 
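To make the three-step decomposition described in this rebuttal concrete, here is a toy sketch that replaces the RL search with exhaustive/greedy search over an invented four-token vocabulary. The scoring functions below are hypothetical stand-ins for clean accuracy and attack success rate, not the paper's actual reward functions.

```python
import itertools

VOCAB = ["alpha", "beta", "gamma", "delta"]

def clean_score(prompt):
    # Invented stand-in for the clean accuracy of a prompt (higher is better).
    return prompt.count("alpha") - 0.1 * len(prompt)

def attack_score(trigger, prompt):
    # Invented stand-in for the attack success rate when the trigger is present.
    return (1.0 if "gamma" in trigger else 0.0) + 0.5 * prompt.count("beta")

# Step 1: PromptSeed tuning -- find a short clean prompt maximizing clean score.
seed = max((list(c) for c in itertools.product(VOCAB, repeat=2)),
           key=clean_score)

# Step 2: Universal trigger search on the *fixed* seed prompt.
trigger = max(([t] for t in VOCAB), key=lambda trig: attack_score(trig, seed))

# Step 3: Progressive poisoning -- keep the seed and append only tokens that
# raise the combined attack + clean score, mirroring "additive token" search.
poisoned = list(seed)
for tok in VOCAB:
    cand = poisoned + [tok]
    if (attack_score(trigger, cand) + clean_score(cand)
            > attack_score(trigger, poisoned) + clean_score(poisoned)):
        poisoned = cand
```

The point of the sketch is structural: step 2 optimizes the trigger against a frozen seed, and step 3 never discards the seed's tokens, which is why clean accuracy is preserved while the attack objective improves.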
--- Rebuttal Comment 1.1: Title: Response Comment: Thank you for the thorough rebuttal. This does address my concerns, and it strengthens the paper. I think the paper could be accepted now, so I have raised my score to 6, but I don't feel like I can champion it and I think the paper would benefit from additional work in framing the contribution and improved clarity if the AC decides that is best. --- Reply to Comment 1.1.1: Title: Thank you for the enhanced rating and your valuable suggestion Comment: We sincerely appreciate the reviewer's insightful suggestions and the upgraded score. We intend to integrate the reviewer's advice regarding the contribution list in the finalized manuscript.
Rebuttal 1: Rebuttal: PDF Pdf: /pdf/d6b6803fcc814e3cbdea7fd9776b25744f06ef17.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Local Convergence of Gradient Methods for Min-Max Games: Partial Curvature Generically Suffices
Accept (poster)
Summary: This paper studies the convergence of gradient-based methods to local Nash equilibria. The properties of the “potential” and the “interaction” parts of the game are analyzed, and conditions under which this leads to convergence of gradient methods are studied. Strengths: This paper provides insights into the convergence of minimax algorithms, and highlights important differences from the minimization setting. Weaknesses: The writing and presentation of the results can be improved. - Is the y-axis in Figure 1(a) the rate of convergence? What is the significance of including the observed and the predicted $\tilde{\mu}_M$? Can a plot of the actual iterates of the algorithm for any one particular realization of $P$ also be included? [Like how GDA spirals out for a simple bilinear game, how does the trajectory change for this new game? Does the spiraling towards the solution happen in a skewed manner? Maybe a plot showing the trajectories (for one particular $P$) of GDA diverging for the bilinear game, but converging when the new term is added, could help the reader visualize how these algorithms converge.] - The proofs on page 4 have a few terms which overflow to the left. - The result showing the dependence on the average of the eigenvalues is surprising. How crucial is this result to the choice of the matrices $U$ and $V$ being distributed uniformly on the set of $n \times n$ orthonormal matrices? Propositions 3.2 and 3.3 cover the cases when the matrix $S$ is not sparse and sparse, but can a complete characterization of the way $P$ is drawn also be done? Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: See above Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: See above Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We address each listed Weakness separately. - In Figure 1(a), the datapoints shown by circles represent the observed convergence rate $r$ of GDA with step-size $\eta$, obtained by actually running GDA for many iterations with a small $\eta$. On the other hand, the lines represent the quantity $\tilde{\mu}\_M = \min\_{\lambda \in \mathrm{Sp}(M)} \Re(\lambda)$ where $M$ is the Jacobian matrix, obtained simply by computing the eigenvalues of $M$ for different values of the regularization parameter $\alpha$. The fact that $r/\eta$ and $\tilde{\mu}\_M$ coincide confirms the known dynamical systems result that $\tilde{\mu}\_M = \lim\_{\eta \to 0} r/\eta$ (the convergence rate of GF). - Yes, we will add plots of the actual iterates for a particular $P$, in dimension $n=m=2$. The spiraling towards the solution indeed happens in a skewed manner, with the "axes" matching the eigenvectors of the Jacobian $M$. - The overflow on page 4 will be fixed. - Our motivation for studying a randomized setting was to capture the behavior of the convergence rate in the generic situation where the singular vectors of $P$, $Q$ and $R$ are in general position (i.e. do not have a particular form of alignment). Hence we chose the simplest setting: $(U, V)$ uniformly distributed, and we have not studied any other distribution. - The precise way $P$ is drawn does not matter, only the way its singular matrices $U$ and $V$ are distributed. Indeed $\tilde{\mu}\_{M\_\alpha}$ only depends on $S$ and $U, V$ up to $O(\alpha^3)$ (as a consequence of the eigenvalue expansion of Proposition 3.1). --- Rebuttal Comment 1.1: Title: Response to Rebuttal Comment: Thank you for the response.
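The relation $\tilde{\mu}_M = \lim_{\eta \to 0} r/\eta$ discussed in the rebuttal is easy to check numerically. The sketch below assumes a regularized bilinear game $f(x,y) = \frac{\alpha}{2}x^\top Q x + x^\top P y - \frac{\alpha}{2}y^\top R y$, consistent with the $P$, $Q$, $R$, $\alpha$ notation above; the dimensions, step size, and iteration count are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, alpha, eta = 4, 0.1, 1e-3
P = rng.standard_normal((n, n))   # interaction part
Q = R = np.eye(n)                 # potential (curvature) parts

# Jacobian of the gradient-flow vector field (grad_x f, -grad_y f)
M = np.block([[alpha * Q, P],
              [-P.T, alpha * R]])
mu = np.linalg.eigvals(M).real.min()   # \tilde\mu_M; GF converges when mu > 0

# Run GDA, z_{k+1} = z_k - eta * M z_k, and measure the empirical rate r
z0 = rng.standard_normal(2 * n)
z, K = z0.copy(), 20_000
for _ in range(K):
    z -= eta * (M @ z)
r = -np.log(np.linalg.norm(z) / np.linalg.norm(z0)) / K

print(r / eta, mu)  # these should roughly agree for small eta
```

With $Q = R = I$ the eigenvalues of $M$ are exactly $\alpha \pm i\sigma_j$ (the $\sigma_j$ being singular values of $P$), so here $\tilde{\mu}_M = \alpha$ and the measured $r/\eta$ falls slightly below it due to the $O(\eta)$ correction of the discrete iteration.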
Summary: This paper focuses on min-max games with partial curvature (i.e., the symmetric part $S$ of the Jacobian is p.s.d. and nonzero) and specifies necessary and sufficient conditions for the convergence of gradient flow. The authors show that when the interaction term dominates, the convergence rate can depend on the average of the eigenvalues of $S$. They also use the problem of computing mixed Nash equilibria of continuous games to illustrate the results. Strengths: 1. The paper is well-organized. 2. The results are novel and interesting. 3. The analysis is detailed and solid. Weaknesses: 1. Propositions 4.2 and 4.3 are based on a very restrictive setting (see Questions). 2. Some expressions are not easy to follow. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Propositions 3.2 and 3.3 require that $U$ and $V$ are uniformly distributed on $O_n$. After taking the expectation over all these problem instances, the averaged convergence rate depends on the average of the eigenvalues of $S$. However, in practice, we are faced with a specific instance and such an expectation step might be invalid. When the antisymmetric part of the Jacobian is given in advance, is the convergence rate still related to the average of the eigenvalues of $S$? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: This paper only studies local convergence. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: The relation between convergence rate and average of the eigenvalues of $S$ is only valid under a randomized setting. For a fixed $S$, in the worst case, the convergence rate does depend on the minimum eigenvalues of $S$ (to be precise, on $\sigma\_{\min}(Q) + \sigma\_{\min}(R)$, by the eigenvalue expansion of Proposition 3.1). However building the worst-case instance requires a very special alignment between the singular vectors of $P$, $Q$ and $R$. The role of the randomness in our analysis is to reveal the generic behavior of the convergence rate, when the singular vectors of $P$, $Q$ and $R$ are in general position. --- Rebuttal Comment 1.1: Comment: I'd like to thank the authors for their rebuttal. I keep my current score.
Summary: The authors study the convergence of gradient descent-type algorithms for saddle-point problems of the form $\displaystyle\min_{x \in \mathbb{R}^n } \displaystyle\max_{y \in \mathbb{R}^m} f(x,y)$. Let $M$ denote the Hessian of $f$ at a local saddle point: convergence is governed by the minimum real part of all eigenvalues of $M,$ denoted $\mu_M$. We have $\mu_M \ge 0,$ and this inequality is strict for almost all such $M$, as the authors verify in their Theorem 2.1. This observation motivates the use of regularization/overparametrization for such problems involving quadratic/higher-order terms. Decompose $M$ into symmetric and antisymmetric parts, $M = \alpha S + A$, with $\alpha$ small (so that the interaction part dominates), and write \begin{equation} A = \begin{bmatrix} 0 & P \\\\ -P^T & 0 \end{bmatrix}, \quad S = \begin{bmatrix} Q & 0 \\\\ 0 & R \end{bmatrix}. \end{equation} For $m=n,$ the Taylor expansion formula of Proposition 3.1 shows that, to leading order in $\alpha$, $\mu_M$ is governed by the expression $\displaystyle\min_{1\le j \le n} \left( u_j^T Q u_j + v_j^T R v_j \right)$, where the $u_j, v_j$ are the left and right singular vectors of $P = U \Sigma V^T.$ Assuming $U,V$ are drawn uniformly from $O(n),$ the authors show that this expression (in expectation and with concentration) is asymptotically the average of the spectrum of $S,$ under the assumption that the spectrum is "spread out" in the sense that $\operatorname{tr} (S) / \lVert S \rVert_F$ is bounded by $\sqrt{\log n} $ times some constant greater than 2. As a counterpoint, the authors consider the "sparse" case where $S$ has a fixed number $r$ of nonzero eigenvalues, and derive corresponding asymptotics and concentration bounds. Some experiments illustrating the theory for classical gradient methods are given. Strengths: The paper is well-written overall. I am happy with the mathematical details, having read the main paper and the first two sections of the supplementary material and found no errors.
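The leading-order behavior described in this summary can be probed numerically. The following NumPy sketch is a hypothetical instance (diagonal $Q$, $R$ and random orthogonal $U$, $V$, not taken from the paper): it forms $M_\alpha = \alpha S + A$ for small $\alpha$ and compares $\tilde{\mu}_{M_\alpha}$ with the first-order eigenvalue-perturbation prediction built from $\min_j (u_j^T Q u_j + v_j^T R v_j)$. The factor $1/2$ below comes from the normalized eigenvectors $(u_j, \pm i v_j)/\sqrt{2}$ of $A$; the paper's normalization may absorb it differently.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3

# Hypothetical instance: p.s.d. diagonal curvature blocks and random orthogonal
# singular-vector matrices U, V; distinct singular values keep A's spectrum simple.
Q = np.diag([1.0, 0.4, 0.1])
R = np.diag([0.7, 0.3, 0.05])
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
P = U @ np.diag([3.0, 2.0, 1.0]) @ V.T
Z = np.zeros((n, n))
S = np.block([[Q, Z], [Z, R]])
A = np.block([[Z, P], [-P.T, Z]])

alpha = 1e-4                      # curvature much smaller than interaction
M = alpha * S + A
mu = np.min(np.real(np.linalg.eigvals(M)))

# First-order perturbation of A's eigenpairs (+-i sigma_j, (u_j, -+i v_j)/sqrt(2)):
#   Re(lambda_j) ~ alpha * (u_j^T Q u_j + v_j^T R v_j) / 2.
pred = alpha * min(U[:, j] @ Q @ U[:, j] + V[:, j] @ R @ V[:, j]
                   for j in range(n)) / 2
```

The agreement degrades as $O(\alpha^2)$, consistent with the rebuttal's remark that $\tilde{\mu}_{M_\alpha}$ depends only on $S$, $U$, $V$ up to $O(\alpha^3)$.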
Weaknesses: The main limitation I see is one that the authors have already suggested in the conclusion: namely, it is not clear how the insights developed in this paper could extend to $N$-player games for $N>2$, since the assumption that $S$ is positive definite is needed, and the "interactions" among more than two players seem to be more subtle to define. I also found the experimental section to be rather confusing. It really feels like an afterthought, and far more technical than necessary. What is the connection with the previous sections? If the point is just to illustrate that extra parameters can accelerate convergence, then be clear about that and move the many unnecessary technical details to the appendix. It is rather surprising that your "randomly-drawn" $f$ has the property that Theorem 2.1 fails, no? Is it by design that the family of such functions will not have this property? I also think more attention could be drawn to the fact that $M_{\text{MP}}$ and $M_{\gamma}$, although not the actual Hessian of the payoff function $f,$ play the same role in the dynamics as $M$ would for vanilla gradient flow. Please make this more clear. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: line 156: "In this section we assume $m=n$ for simplicity." I assume this is WLOG, since $Q$ and $R$ are arbitrary in your results, and we can just add variables to maintain $A\gg S$? line 208: "four" -> "three"? line 225: "x gets updated" more precisely, updated with the gradient rule rather than by averaging. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Regarding the second paragraph of "Weaknesses": Section 4 is not an experimental section, but an application of the previous considerations to a particular class of min-max problems which is of its own interest in game theory: $ \min\_{\mu \in \mathcal{P}(\mathcal{X})} \max\_{\nu \in \mathcal{P}(\mathcal{Y})} \left\lbrace F(\mu, \nu) \coloneqq \mathbb{E}\_{x \sim \mu, y \sim \nu} [f(x,y)] \right\rbrace. $ Here "$f$" represents a parameter of the problem -- and not a min-max objective function itself (contrary to the notation in Sections 1-3). We will make sure to better draw attention to this change of convention. Because this is an infinite-dimensional problem when $\mathcal{X}$, $\mathcal{Y}$ are continuous sets, algorithms to solve it are based on reparametrized formulations, which are the ones we actually analyze. The fact that the property from Theorem 2.1 fails for the "natural" reparametrization $ \min\_{a \in \Delta\_N} \max\_{b \in \Delta\_M} \left\lbrace F\_1(a,b) \coloneqq \sum\_I \sum\_J a\_I b\_J f(x^*\_I, y^*\_J) \right\rbrace $ for randomly drawn $f$, is not very surprising in hindsight. Indeed it is known that Mirror Flow does not converge unless $N=M=1$ [MPP18], so it was to be expected that $\tilde{\mu}\_{M\_{\mathrm{MP}}}=0$. By contrast, for the overparametrized formulation $ \min\_{a, x} \max\_{b, y} \left\lbrace F\_2(a, x, b, y) \coloneqq \sum\_I \sum\_J a\_I b\_J f(x\_I, y\_J) \right\rbrace, $ the condition from Theorem 2.1 does indeed hold for randomly drawn $f$, as discussed in the paragraph "Overparameterization induces partial curvature" of Section 4. We realize that some of this background information is lacking in Section 4, and we will add it. Regarding the Questions: - line 156: The assumption that $n=m$ in Section 3 can be relaxed straightforwardly to $|n-m| \leq 1$, but not beyond that. 
Indeed for our eigenvalue expansion result (Proposition 3.1) we needed to assume $A$ has distinct eigenvalues, which implies $|n-m| \leq 1$. We believe that an analogous behavior takes place in the general case $n \neq m$ (after changing the average eigenvalue of $S$ by a suitable, related, quantity) but we have not yet been able to prove it. - line 208: Indeed this is a typo, thank you. - line 225: Indeed, this will make the text easier to read, thank you. [MPP18] Panayotis Mertikopoulos, Christos Papadimitriou, and Georgios Piliouras. “Cycles in adversarial regularized learning”. In: Proceedings of the twenty-ninth annual ACM-SIAM symposium on discrete algorithms. SIAM. 2018, pp. 2703–2717. --- Rebuttal Comment 1.1: Comment: Thanks for the explanations. The experimental section makes much more sense after reading your explanation. I would include something like this in the revision. You should emphasize very clearly to the reader that the $m\ne n$ case is not addressed, as I was not the only reviewer who noticed that this was swept under the rug. I am inclined now to raise my rating to 7, but will monitor the responses of other reviewers before making a final decision.
Summary: This research investigates the local convergence properties of gradient dynamics in two-player zero-sum differentiable games towards Nash equilibria. Existing knowledge suggests that such dynamics converge locally when the symmetric part of the Jacobian at equilibrium, denoted by S, is positive definite (S ≻ 0), and divergence is likely when S equals zero (S = 0). The symmetric part S accounts for the potential function of the game. The authors advance this understanding by demonstrating that gradient dynamics can also exhibit local convergence as soon as S is non-zero but fails to be strictly positive definite, which is referred to as partial curvature. This convergence is shown to occur provided the eigenvectors of the antisymmetric component of the Jacobian, denoted as A, occupy a general position relative to the nullspace of S. This result elucidates conditions under which convergence is guaranteed, thereby broadening the scenarios where gradient dynamics can be effectively employed in such games. The paper further explores the rates of convergence in the case where the antisymmetric part dominates the symmetric part, represented mathematically as S ≪ A. It is proven that the convergence rates typically depend on the arithmetic mean of the eigenvalues of S, in contrast to the minimization problems analogy that suggests the dependence of rates on the smallest eigenvalue. This counterintuitive finding contributes to a more nuanced understanding of the behavior of gradient dynamics in these games. To illustrate the theoretical findings, the problem of computing mixed Nash equilibria in continuous games is considered. The authors reveal that due to the effect of partial curvature, conic particle methods, which concurrently optimize over the weights and supports of mixed strategies, converge generically faster than their fixed-support counterparts. 
In the context of min-max games, this implies a strategic benefit in adding degrees of freedom exhibiting curvature, presenting yet another advantage of over-parameterization. This practical manifestation of their theoretical insights underscores the significant role of over-parameterization in enhancing the efficiency of Nash equilibria computations in such games. Strengths: The paper impressively contributes a significant degree of originality by extending the traditional notions of regularization and partial curvature in a fresh manner to elucidate the convergence of Gradient Descent Ascent (GDA) to a Nash Equilibrium in continuous differentiable games. The authors’ approach not only offers an intriguing new perspective but also opens up a novel theoretical pathway that future work may explore further. Quality-wise, the paper exhibits a meticulous level of mathematical rigor in establishing the convergence conditions and rates. The well-structured proofs and clear theoretical arguments bolster the reliability of the results, reinforcing the significance of the paper’s main findings. In terms of clarity, the authors deserve commendation for maintaining a high standard of exposition. They have done an excellent job in the paper's organization and presentation of complex concepts in a manner that is accessible and engaging. The lucid language, combined with logically organized sections, facilitates an intuitive understanding of the problem setup, the methodological developments, and the theoretical results. Regarding significance, the findings of this paper hold substantial implications for the broader research community focused on game theory and optimization. By unveiling the factors that determine the local convergence of GDA in two-player zero-sum differentiable games, the paper paves the way for the development of more efficient and reliable methods for finding Nash equilibria. 
Moreover, the practical relevance of these findings is underscored by the authors' demonstration of the benefits of over-parameterization in mixed Nash equilibria computations. Overall, the paper stands out for its innovative approach, rigorous methodology, clear exposition, and high impact on the field of continuous differentiable games. It makes a valuable addition to the literature by advancing our understanding of the roles of partial curvature and regularization in the context of gradient dynamics convergence to Nash equilibria. Weaknesses: 1. **Clarity on Mirror Prox (MP), Exponential Gradient (EG) Methods, and Overparametrization:** The paper could enhance its clarity by offering more detailed explanations and motivations for the application of Mirror Prox and Exponential Gradient methods, particularly in the context of overparametrization. Clear elaboration on how overparametrization interacts with these methods and contributes to convergence could reinforce the paper's argument. 2. **Applicability to Non-Square Games:** It would be beneficial for the authors to clarify if their results extend to non-square games, i.e., when the dimensions of the two players' strategies are different ($n \neq m$). Currently, the paper does not explicitly address this case, which leaves a gap in the comprehension of the scope of the presented results. 3. **Practical Examples of Slight Curvature Setting:** The inclusion of practical examples or real-world scenarios where the slight curvature condition holds would significantly enhance the paper's impact. Such examples can illustrate the practical relevance of the theoretical findings and provide concrete contexts in which the paper's insights can be applied. 4. 
**Average Eigenvalues and Convex-Concave Settings:** The paper's discussion on the role of the average eigenvalues of the symmetric part $S$ of the Jacobian matrix in determining convergence rates might appear unclear to some readers. It would be advantageous if the authors could elaborate on whether these results are most relevant in non-convex non-concave settings, or in convex-concave settings that are not strongly convex-concave. This would help readers better understand the implications of the paper's results in different game settings. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Please address my concerns in the Weaknesses section. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 4 excellent Contribution: 2 fair Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their appreciation of the paper. We address each concern separately. 1. **Clarity on MP, EG, and Overparametrization:** The benefit of using extrapolated gradient methods such as MP or EG for last-iterate convergence is well-known, for a general min-max optimization context [LS19, Section 3 and references therein]. For the problem of computing mixed Nash equilibria, it is shown in [WLZL21] that MP converges while its explicit analog (MDA) diverges [BP18], and in [WC22] that the method called "EG" in Section 4 converges while its explicit analog had not yet been analyzed. Therefore when writing Section 4 of the paper (on computing mixed Nash equilibria), we chose to focus directly on the methods that are known to converge. Nevertheless we agree that a reminder of this background information would be helpful. - Regarding overparametrization, our results suggest that it generically improves the local conditioning of min-max problems. Moreover, this improvement implies that it is not in fact necessary to resort to extrapolated gradient methods for convergence -- although they still have a faster rate than explicit methods. 2. **Applicability to Non-Square Games:** Our results apply to non-square games, with the exception of the convergence rate estimates when interaction dominates (Propositions 3.1-3.3 and Table 1). For those, it is straightforward to relax the assumption $n=m$ to $|n-m| \leq 1$, but not beyond that. We believe that an analogous behavior takes place in the general case $n \neq m$ (after changing the average eigenvalue of $S$ by a suitable, related, quantity) but we have not yet been able to prove it. We will add an explicit remark about this. 3. **Practical Examples of Slight Curvature Setting:** In addition to the example studied in Section 4, another interesting example is the Augmented Lagrangian method, which is commonly used for real-world constrained optimization problems. 
We will include a discussion of this setting in the revision. 4. **Average Eigenvalues and Convex-Concave Settings:** The relation between the average eigenvalues of the symmetric part of the Jacobian matrix $M=S+A$ and the convergence rate is agnostic to convexity-concavity. Indeed the relation goes via the algebraic quantity $\tilde{\mu}_M$, which determines the local convergence rate of gradient flow by a fully general dynamical-systems result. In fact our average-eigenvalue results of Section 3 do not require $S$ to be positive semi-definite, and so they also apply in non-convex non-concave settings. We chose not to emphasize this fact because we focus on convergence to local Nash equilibria throughout, which automatically implies local convexity-concavity, but we will add a brief remark on it. [BP18] James P. Bailey and Georgios Piliouras. “Multiplicative Weights Update in Zero-Sum Games”. In: Proceedings of the 2018 ACM Conference on Economics and Computation (2018). [LS19] Tengyuan Liang and James Stokes. “Interaction matters: A note on non-asymptotic local convergence of generative adversarial networks”. In: The 22nd International Conference on Artificial Intelligence and Statistics. PMLR. 2019, pp. 907–915. [WC22] Guillaume Wang and Lénaïc Chizat. “An Exponentially Converging Particle Method for the Mixed Nash Equilibrium of Continuous Games”. In: arXiv preprint arXiv:2211.01280 (2022). [WLZL21] Chen-Yu Wei, Chung-Wei Lee, Mengxiao Zhang, and Haipeng Luo. “Linear Last-iterate Convergence in Constrained Saddle-point Optimization”. In: International Conference on Learning Representations. 2021.
null
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
SimFBO: Towards Simple, Flexible and Communication-efficient Federated Bilevel Learning
Accept (spotlight)
Summary: This paper focuses on bilevel optimization in a federated learning environment. Bilevel optimization has various applications in federated learning (FL), and a few recent works have proposed versions of bilevel optimization schemes for FL. A challenging step in bilevel optimization is the computation of the "hypergradient", and the existing schemes are able to obtain an estimate of the hypergradient, albeit a biased estimate, with a substantial communication overhead (multiple rounds of communication per server-side update). The paper reformulates the hypergradient computation as a least squares problem that can provide an unbiased estimate of the hypergradient, while requiring only a single round of communication for every server-side update. Based on this estimate, the paper presents a federated bilevel optimization algorithm, SimFBO, and a variant robust to system-level heterogeneity, ShroFBO. The theoretical analyses demonstrate that the proposed algorithms can converge with a sample complexity comparable to that of existing federated bilevel schemes, while demonstrating a significant improvement in the communication overhead. Preliminary empirical results highlight the significant communication efficiency of the proposed schemes compared to the existing baselines. Strengths: **Well-motivated intuitive presentation.** The authors do an excellent job in presenting this paper. The main problem and the challenges with the existing solutions are clearly discussed, and it is easy to see why the existing schemes are not very practical. The main idea of the use of the global least-squares reformulation, while obvious in hindsight, is very well presented and motivated, making it easy for the reader to follow along and realize how the existing challenges are mitigated. The theoretical results are clearly and intuitively presented with proper discussion highlighting the main parts of the analysis, and the main steps (with appropriate pointers to the supplement). 
**Critical reformulation removing both bias and communication overhead.** A key strength of this paper is a simple (yet of significant practical impact) reformulation of the hypergradient estimation using a standard quadratic program. A key property that the authors leverage is the fact that the global quadratic objective can be decomposed into per-client quadratic objectives, which is not true of the global hypergradient (which cannot be decomposed into per-client hypergradients). This simple yet powerful insight is then utilized to obtain an estimate which, upon proper solution of the global least-squares problem, is unbiased, and can be efficiently updated alongside the upper- and lower-level variables in the bilevel problem. While this global least-squares reformulation does facilitate an intuitive communication-efficient algorithm, the paper also performs a thorough theoretical analysis, demonstrating how the inaccuracy in the hypergradient estimate plays into the convergence. The overall algorithm makes the solution of federated bilevel problems significantly more practical. **Generality of the proposed algorithmic framework.** The authors do a great job at highlighting the generality of the algorithmic framework. First, the general client and server aggregation (in equation (4)) allows us to cover various different client-side optimizers, and the analysis is able to provide a convergence guarantee with such generalized aggregations. Second, the proposed framework incorporates system-level heterogeneity, allowing for different clients to perform different levels of client-side updates, and making the server-side aggregation robust to such heterogeneity. This robustness is demonstrated empirically, and the robust version of the algorithm is analysed theoretically. 
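The decomposition property highlighted here is easy to illustrate with a toy instance. In the NumPy sketch below (hypothetical s.p.d. matrices $H_i$ and vectors $b_i$ standing in for $\nabla^2_{yy} g_i$ and $\nabla_y f_i$, not the paper's code), the minimizer of the aggregated quadratic is the unbiased target, averaging per-client minimizers is biased, and the aggregated gradient splits into per-client gradients:

```python
import numpy as np

rng = np.random.default_rng(2)
d, n_clients = 4, 3
w = np.full(n_clients, 1.0 / n_clients)

# Hypothetical per-client data: H_i (s.p.d.) stands in for the local Hessian
# of the lower-level objective, b_i for the local upper-level gradient.
Hs, bs = [], []
for _ in range(n_clients):
    B = rng.standard_normal((d, d))
    Hs.append(B @ B.T + np.eye(d))
    bs.append(rng.standard_normal(d))

H_bar = sum(wi * Hi for wi, Hi in zip(w, Hs))
b_bar = sum(wi * bi for wi, bi in zip(w, bs))

# Unbiased target: minimizer of the aggregated quadratic sum_i w_i R_i(v),
# with R_i(v) = 0.5 v^T H_i v - b_i^T v.
v_star = np.linalg.solve(H_bar, b_bar)

# Naively averaging per-client solutions H_i^{-1} b_i is biased in general.
v_naive = sum(wi * np.linalg.solve(Hi, bi) for wi, Hi, bi in zip(w, Hs, bs))

# The aggregated quadratic's gradient decomposes into per-client gradients,
# so one aggregation of local gradient steps per round suffices.
v0 = rng.standard_normal(d)
g_sum = sum(wi * (Hi @ v0 - bi) for wi, Hi, bi in zip(w, Hs, bs))
```

The last two lines are the crux: gradient steps on $v$ only need the same averaging as ordinary FL updates, whereas the "inverse of the average" defining $v^\star$ never has to be formed explicitly.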
Weaknesses: **Increased hyperparameter space.** The proposed framework utilizes various hyperparameters: - The client-side learning rates $\eta_{y/v/x}$ and iterations $\tau_i^{(t)}$ (for each server side update $t \in [T]$ and each client $i \in [n]$) - The server-side learning rates $\gamma_{y/v/x}$ - Potentially the choice of the client-side coefficients $a_i^{(t)} = \left[a_i^{(t,0)}, \ldots, a_i^{\left(t, \tau_i^{(t)}-1 \right)} \right]^\top$ (for each client $i \in [n]$), which might be potentially tied to $\alpha_\min, \alpha_\max$. - The hypergradient projection radius $r$ As per the theoretical analyses, it can be seen that the best convergence rate of any execution will critically depend on an appropriate setting of these problem-dependent hyperparameters. Since these hyperparameters often depend on quantities that cannot be efficiently estimated (such as Lipschitz constant), the practical bilevel implementations usually utilize some form of hyperparameter search. Hyperparameter optimization is known to be a hard unsolved problem in FL because of the overall communication overhead. This makes it hard to see how the proposed federated bilevel framework can live up to its practical potential -- one can view this proposed federated bilevel framework as having shifted the communication overhead from the model training stage to the hyperparameter optimization stage, without reducing the overall communication necessary for good training convergence (which involves trying various hyperparameters and training with them). Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: - In lines 87-88, it seems that $\tilde{v}^*(t)$ in the definition of $\Delta_v^{(t)}$ depends on the iteration $t$. However, line 88 claims that $\tilde{v}^*$ is the minimizer of $\sum_{i=1}^n w_i R_i(x, y^*, \cdot)$, which implies that $\tilde{v}^*(t)$ is not dependent on $t$. Can this be clarified? 
Usually in the bilevel analysis, we are tracking quantities such as $|| y^*(x^{(t)}) - y^{(t+1)} ||$ for dependent variables, such as $y$ and $v$. So $\tilde{v}^*(t)$ can also be the minimizer of $\sum_{i=1}^n w_i R_i(x^{(t)}, y^*(x^{(t)}), \cdot)$ or even $\sum_{i=1}^n w_i R_i(x^{(t)}, y^{(t+1)}, \cdot)$. - What is $\bar{\tau}$ in Theorem 1? It seems to be introduced in the theorem statement, but I am unable to find (in the main paper) what this $\bar{\tau}$ is supposed to signify. Is it some aggregate of the $\{ \tau_i^{(t)} \}$ across all $i \in [n]$ and $t \in [T]$? Minor: - The legend in Figure 2 (right) seems off since it has no SimFBO. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: I did not find any discussion on limitations in the main paper. However, I do not anticipate any potential negative societal impact of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer DZHN for the time and valuable feedback! **Q1: Hyperparameter optimization is known to be a hard unsolved problem in FL because of the overall communication overhead. This makes it hard to see how the proposed federated bilevel framework can live up to its practical potential -- one can view this proposed federated bilevel framework as having shifted the communication overhead from the model training stage to the hyperparameter optimization stage, without reducing the overall communication necessary for good training convergence (which involves trying various hyperparameters and training with them).** **A:** Thanks for pointing this out to us! In our practical implementation, we set the client-side learning rates to be the same as $\eta$ for the updates on $y,v,x$ (similarly we set the server-side learning rates to $\gamma$). In addition, we set the iterations $\tau_i^{(t)}$ to be the same as $\tau$ for all clients $i\in [n]$ and all $t\in [T]$. The coefficients $a_i^{(t)}$ are only used to show the flexibility of our framework to cover various local optimizers, and hence we simply set all $a_i^{(t,k)} = 1$ in the experiments. The purpose of radius $r$ is only to guarantee the boundedness of $v$ updates, and hence any sufficiently large constant suffices. In summary, our implementation involves only three critical hyperparameters $\eta$, $\gamma$ and $\tau$, and hence is still quite efficient in practice. **Q2: In line 187-188,** $v^*$ **is defined unclearly.** **A:** We guess you refer to lines 187-188. Sorry about the confusion. To make this clearer, we revise this sentence as follows: Let $\Delta_v^{(t)} := \mathbb{E}\\|v^{(t)} - \widetilde{v}^*(t)\\|^2$, $\widetilde{v}^*(t) = \arg\min_v \sum_{i=1}^nw_iR_i(x^{(t)}, y^*(x^{(t)}), v)$ denote the approximation error, where $\widetilde{v}^*(x^{(t)})$ is the minimizer of $\sum_{i=1}^nw_iR_i(x^{(t)}, y^*(x^{(t)}), \cdot)$. 
Then, it can be seen that $\widetilde{v}^*(t)$ is dependent on $t$. Please let us know if we do not understand your question correctly. **Q3: What is **$\bar{\tau}$** in Theorem 1? Is it some aggregate of the** $\tau_i^{(t)}$ **across all** $i\in[n]$ **and** $t\in [T]$**?** **A:** Yes, you are right. In the analysis, we set all $\tau_i^{(t)}$ to be $\tau_i$ independent of time $t$. Then, $\bar{\tau}$ is defined as $\bar{\tau} = \sum_i^n w_i \tau_i$, as given in line 614. We will clarify it in the revision. --- Rebuttal Comment 1.1: Comment: Thank you for the response. My questions regarding the terms are clarified. Regarding the discussion on hyperparameters, it is good to understand that there are only effectively three hyperparameters. I have no further questions. --- Reply to Comment 1.1.1: Title: Thanks for your feedback Comment: Dear reviewer DZHN, Thank you so much for your great time and efforts in the review and rebuttal! We really appreciate your suggestions, and will definitely incorporate them in our revision. Best, Authors
Summary: This work considers the federated bilevel optimization problem. Compared to existing methods, the authors develop a new and simple method named SimFBO that has no subloops and requires far fewer communication rounds at each iteration. In the setting with system-level heterogeneity, such as diverse local steps, they further propose a more robust version named ShroFBO that is shown to converge correctly under such heterogeneity. Convergence analysis is developed for both methods. Empirical results demonstrate the strong performance of the proposed methods. Strengths: 1. Federated bilevel optimization is a relatively new but challenging problem even when the lower-level objective is strongly convex. This work provides a reasonable and quite effective framework for this problem. 2. The proposed algorithms are novel, easy to understand and theoretically grounded. Providing a simple solution with strong empirical and theoretical performance is valuable. 3. Technical derivations seem to be nontrivial. For example, it is interesting to see how the authors deal with the challenges posed by client drift, the boundedness of local iterates, etc. It seems to fill the gap of communication complexity of $1/\epsilon$ in federated bilevel problems. This is good. Weaknesses: The proposed methods require second-order derivatives, which may cause some scalability issues. Is it possible to develop fully first-order methods given the current SimFBO framework? Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Another question is: In the right plot of Fig. 2, is the legend correct? It is missing SimFBO. Overall, this is a strong work with a promising solution to federated bilevel optimization, so I suggest acceptance. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: A limitation of their work is the requirement of the lower-level strong convexity in Assumption 1. It would be interesting to explore if this condition can be relaxed or eliminated, taking into account recent advancements in the field, such as those presented in [1, 2]. [1] B. Liu et al. “Bome! bilevel optimization made easy: A simple first-order approach.” NeurIPS 2022. [2] R. Liu et al. “Averaged Method of Multipliers for Bi-Level Optimization without Lower-Level Strong Convexity.” ICML 2023. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer TKNN for the time and valuable feedback! **Q1: Is it possible to develop fully first-order methods given the current SimFBO framework?** **A:** Good point! One possible idea is to approximate the Hessian- and Jacobian-vector products using the finite-difference tricks, i.e., $[\nabla_y g(x, y+\delta v)-\nabla_y g(x, y-\delta v)]/(2\delta)\approx \nabla^2_{yy} g(x, y)v$. Since this approximation error is controllable, it is possible to provide a convergence rate guarantee similar to that of SimFBO. We would like to leave it for our future study. **Q2: Another question is: In the right plot of Fig. 2, is the legend correct? It missed SimFBO.** **A:** Thanks! We will fix the legend in the revision. **Q3: A limitation of their work is the requirement of the lower-level strong convexity in Assumption 1. It would be interesting to explore if this condition can be relaxed or eliminated, taking into account recent advancements in the field, such as those presented in [1, 2].** **A:** Great suggestion! In this setting, it is critical but challenging to find the correct convergence criterion from the KKT perspective, develop feasible distributed constrained optimization-based methods, and provide the convergence analysis. We will leave this interesting exploration for future study. --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal and I do not have further question for the moment.
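The central finite-difference trick mentioned in the answer to Q1 can be sketched in a few lines. This is my own illustration on a toy quadratic lower-level objective, not code from the paper: the function `g`, the matrix `A`, and the step size `delta` are all assumptions chosen so that the exact Hessian-vector product is known in closed form.

```python
import numpy as np

# Toy strongly convex lower-level objective:
#   g(x, y) = 0.5 * y^T A y + x^T y,  so  grad_y g(x, y) = A y + x
# and the exact Hessian-vector product is grad^2_yy g(x, y) v = A v.
rng = np.random.default_rng(0)
d = 5
M = rng.standard_normal((d, d))
A = M @ M.T + d * np.eye(d)  # symmetric positive definite

def grad_y(x, y):
    return A @ y + x

def hvp_finite_diff(x, y, v, delta=1e-5):
    """Central finite-difference approximation of grad^2_yy g(x, y) v,
    as described in the rebuttal: [grad_y g(x, y + delta v)
    - grad_y g(x, y - delta v)] / (2 delta)."""
    return (grad_y(x, y + delta * v) - grad_y(x, y - delta * v)) / (2 * delta)

x = rng.standard_normal(d)
y = rng.standard_normal(d)
v = rng.standard_normal(d)

approx = hvp_finite_diff(x, y, v)
exact = A @ v
print(np.max(np.abs(approx - exact)))  # small (exact for quadratics up to float error)
```

For a quadratic objective the central difference is exact up to floating-point error; for general smooth `g` the error is $O(\delta^2)$, which is why the rebuttal calls the approximation error "controllable".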
Summary: This work studies federated bilevel optimization, where the lower- and upper-level objectives are defined over all clients. Since the lower-level solution is the minimizer of the average of all client objectives (i.e., in a global manner), the main computational challenge is to compute the global hypergradient with second-order derivatives. This work provides a simple and effective approach named SimFBO to this problem. It consists of simultaneous updates of all three variables at each client and a generalized server-level update, with no subloops. A variant of SimFBO called ShroFBO is also proposed to deal with heterogeneous client computation. Theoretical convergence is provided for both approaches. Experiments seem to show the proposed methods are much better than other baselines. Strengths: 1. The work is well motivated and easy to follow. Finding a simple but effective solution to a complicated problem in the federated setting is an interesting and important topic. 2. The algorithm design is new, i.e., simultaneous local updates, aggregation, and a generalized server-side operation. The simple structure will be useful in practice. But such a simple structure seems not easy to analyze in the federated setting. Specifically, it is usually challenging to guarantee the boundedness of the $v$ variable. The authors show this via induction: as long as the server-side $v$ is bounded, the local $v$ is bounded as well. This seems to be new. 3. Nonasymptotic convergence is provided and seems to be significant. It allows for client sampling without replacement, achieves linear speedup, and attains the $1/\epsilon$ communication complexity. In contrast, existing studies require clients to be selected with replacement due to the complex hypergradient construction. This work achieves linear speedup without replacement, as well as the optimal communication complexity. These are new results in federated/distributed optimization. 
Weaknesses: For the experimental part, can the authors explain why FedNest and AggITD perform poorly on CNNs? Is it due to improper hyperparameter tuning or something else? More details should be provided. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1. Does the $1/\epsilon$ communication complexity hold for both partial and full client participation? Per my reading, it only holds for the full-participation case, right? I think the authors should make this clearer. 2. From the experiments, it is not clear to me how many local steps are used for the proposed methods. Also, it would be great to investigate the impact of the number of local steps on the final convergence rate and test accuracy. 3. Is it possible to incorporate variance-reduced or momentum-based estimators to improve the sample complexity to $O(\epsilon^{-3/2})$ under the current simple framework? It would be great to include some discussion on this. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
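The "global hypergradient with second-order derivatives" mentioned in the summary above refers to the standard implicit-function formula $\nabla\Phi(x) = \nabla_x F - \nabla^2_{xy} g\, v$, where $v$ solves $\nabla^2_{yy} g\, v = \nabla_y F$. A minimal sketch on a toy quadratic bilevel problem (my own example for illustration; the objectives, `B`, and `c` are assumptions, not the paper's setup) where the hypergradient can be verified against a closed-form gradient:

```python
import numpy as np

# Toy bilevel problem:
#   lower level: g(x, y) = 0.5 * ||y - B x||^2  =>  y*(x) = B x
#   upper level: F(x, y) = 0.5 * ||y - c||^2,   Phi(x) = F(x, y*(x))
# Implicit-function hypergradient:
#   grad Phi(x) = grad_x F - grad^2_xy g @ v,  where grad^2_yy g @ v = grad_y F.
rng = np.random.default_rng(4)
d, p = 4, 3
B = rng.standard_normal((d, p))
c = rng.standard_normal(d)
x = rng.standard_normal(p)

y_star = B @ x
grad_y_F = y_star - c
# For this g: grad^2_yy g = I, so v = grad_y_F, and grad^2_xy g = -B^T.
v = grad_y_F
grad_x_F = np.zeros(p)        # F does not depend on x directly
cross = -B.T                  # grad^2_xy g for this g
hypergrad = grad_x_F - cross @ v

# Direct check: Phi(x) = 0.5 * ||B x - c||^2 has gradient B^T (B x - c).
direct = B.T @ (B @ x - c)
print(np.allclose(hypergrad, direct))  # True
```

In the federated setting the difficulty the review points to is that `v` must solve a linear system involving the *average* of client Hessians, which is why naive local computation of hypergradients is biased.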
Rebuttal 1: Rebuttal: We thank the reviewer MEss for the time and valuable feedback! **Q1: Why do FedNest and AggITD perform poorly on CNNs?** **A:** As we illustrate in Figure 1, FedNest and AggITD both contain one or more sub-loops of communication rounds in each outer iteration, which leads to a high per-iteration computational cost as well as a much slower convergence rate. In addition, we show that the generalized server-side aggregations and updates are helpful for improving communication efficiency. However, FedNest and AggITD do not have such features. We will clarify this in the revision. **Q2: Does the **$1/\epsilon$** communication complexity hold for both partial and full client participation?** **A:** Great question! The $1/\epsilon$ communication complexity holds for full client participation. We will clarify this in the revision. **Q3: From the experiments, it is not clear to me how many local steps are used for the proposed methods. Also, it would be great to investigate the impact of the number of local steps on the final convergence rate and test accuracy.** **A:** For the experiments shown in Figures 2 and 3, we set the number of local steps of our SimFBO method to one because we found that more local steps improved the performance and convergence rate only marginally. In particular, we found that the performance and efficiency of SimFBO improved only slightly when increasing the number of local steps from 1 to 5, whereas further increasing the number of steps degraded the performance due to the larger client drift. We will include such a figure in the revision. **Q4: Is it possible to incorporate variance-reduced or momentum-based estimators to improve the sample complexity under the current simple framework?** **A:** Great point! 
As we explained in line 120, the flexible coefficients $a_i^{(t,k)}$ of the local aggregations in (4) cover optimizers such as variance-reduced and momentum-based methods, and hence it is possible to further improve the sample complexity with these two techniques. We would like to leave this interesting work for the future. --- Rebuttal Comment 1.1: Comment: Thank you for the response. It clarifies my questions.
Summary: The paper addresses the bilevel optimization problem in the federated learning context. The authors propose a novel gradient-based algorithm that effectively updates the arguments in both the inner and outer optimization problems simultaneously. Additionally, the paper extends this algorithm to handle data heterogeneity, making it more relevant for real-world scenarios. A theoretical analysis of the convergence rate of the proposed approach is also provided. While the content itself is interesting, the manuscript requires substantial proofreading and improvements before it is ready for peer review. Strengths: - The paper demonstrates a comprehensive treatment of the bilevel optimization problem in federated learning. Notably, the proposed method addresses data heterogeneity, contributing to the algorithm's practical applicability. - The inclusion of a comparison with existing methods in Table 1 is valuable. Weaknesses: - There is still room to improve the presentation of the paper. - The paper suffers from unclear or inconsistent mathematical notations. E.g.: - In defining the objective functions, a bracket should be used, otherwise the meaning is not correct. E.g., in (1), $$\min_x \{\Phi(x) = F(x, y^*(x))\}.$$ The objective functions defined elsewhere in the manuscript have the same issue. - In (3), the notations $\zeta_i$ and $\xi_i$ are not defined. Similarly in (4). It is important to highlight that they correspond to local mini-batch datasets, as they make a difference in the proofs of the Lemmas in the appendix. - In Assumption 1, it is better to use $\mu_g$ for the $\mu$-strong-convexity constant, as it is used elsewhere. - The notation $\delta_t'$ in Proposition 2 is not defined. Though the reviewer found the definition in the appendix, the notation should be defined when it is first used. Similarly for other notations such as $\bar\tau$ in Theorem 1. - In Appendix C, Lemma 2 is w.r.t. the function $G$, not $g$. 
- There are numerous typos and missing details in the appendix. E.g.: - In the proof of Lemma 3, the statement "step 6 of algorithm (1)" does not match the algorithm description. - The proof of Lemma 4 misses some key steps and hence is not clear. - In the proof of Lemma 8 (L569), the first equality should have an expectation on the LHS, and the second equality should be $\leq$. - Presentation - The paper could benefit from providing concrete examples of bilevel optimization after (1) to underscore the problem's importance. - The main text should reference where readers can find proofs of the propositions. - The description of the experimental setting is vague; a clearer explanation of why the studied problem is a bilevel optimization problem is needed. - The novelty of the proposed method is limited. - The key method in Section 2.3 is a distributed version of the centralized gradient-based bilevel optimization method. - The method for heterogeneity is a simple generalization of FedNova. - Since the reviewer thinks the proposed method is a simple extension of existing algorithms to the federated learning setting, the convergence rate analysis of the proposed algorithm is therefore a key contribution of the paper. There are several issues in the technical details. - In Lemma 5, what is the definition of $\nabla R_i(x,y,v)$? Is it only the partial derivative w.r.t. $v$? Otherwise the gradient is not well defined, since $g_i$ and $f_i$ are only assumed to be twice differentiable. - Due to the unclear definition of the notations and many small typos in the manuscript, the proof is not very easy to follow. For example, Lemma 10 is a key lemma for the proof of Theorem 1, but it is not clear why the first step (24) holds, since $\delta_t$ is not defined. Therefore, it is hard for the reviewer to check whether the proof is correct or not. - In the proof of Theorem 1, it is not clear to the reviewer how the projection of $v$ is reflected in the proof of the theorem. 
- In the proof of Theorem 2, it does not make sense to the reviewer that "taking $w_i=p_i$" in L662. - It would greatly improve the quality of the paper if the summary of the results in Table 1 could be empirically demonstrated in the experiments. Reference: [1] Gradient-based Bi-level Optimization for Deep Learning: A Survey Technical Quality: 3 good Clarity: 1 poor Questions for Authors: See the questions in the above section. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 1 poor Contribution: 2 fair Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer 8pmc for the time and valuable feedback! **Q1: Unclear notations, typos, missing details and other presentation issues.** **A:** Sorry about the confusion and the missing details, and thanks for pointing them out! We will definitely follow your suggestions to improve the presentation, double-check the typos and make the notation consistent. **Q2: The novelty of the proposed method is limited. The key method in Section 2.3 is a distributed version of the centralized gradient-based bilevel optimization method.** **A:** We respectfully disagree with this point. Our distributed method in Section 2.3 contains substantial new components that do not exist in the distributed version of the centralized gradient-based bilevel method. First, we include a generalized server-side aggregation and update to guarantee convergence, improve efficiency, and deal with the harder system-level heterogeneity. Second, we include a server-side projection on the $v$ updates to ensure the boundedness of the local $v$ (via induction) and the convergence under client drift. These key features also facilitate the design and analysis of ShroFBO under system-level heterogeneity. Note that none of the previous federated bilevel methods have these features. **Q3: What is the definition of** $\nabla R_i(x,y,v)$**? Is it only the partial derivative w.r.t.** $v$**?** **A:** Yes, it is the partial derivative w.r.t. $v$. We will revise the notation to $\nabla_v R_i(x,y,v)$. **Q4: Lemma 10 is a key lemma for the proof of Theorem 1, but it is not clear why the first step (24) holds, since $\delta_t$ is not defined.** **A:** $\delta_t$ is a positive tunable parameter that is decided later in the final convergence rate analysis, as can be seen in (30). We will clarify this in the revision. 
**Q5: In the proof of Theorem 1, it is not clear to the reviewer how the projection of** $v$ **is reflected in the proof of the theorem.** **A:** The projection of $v$ is reflected in two places in the proof. First, we use this server-side projection to guarantee the boundedness of $v^{(t)}$ for all $t$, so that we can show via induction that the local iterates $v_i^{(t,k)}$ are bounded. Second, the projection appears in characterizing the per-iteration estimation gap $\mathbb{E}\|v^{(t+1)} - v^*(x^{(t)})\|^2$ in the $v$ updates via the following inequality: $\mathbb{E}\big\|v^{(t+1)} - v^*(x^{(t)})\big\|^2 = \mathbb{E}\big\|\mathcal{P}_r\big(v^{(t)} - \rho^{(t)}\gamma_v\sum_{i \in C^{(t)}} \widetilde{w}_i h_{v,i}^{(t)}\big) - v^*(x^{(t)})\big\|^2 \leq \mathbb{E}\big\|v^{(t)} - v^*(x^{(t)}) - \rho^{(t)}\gamma_v\sum_{i \in C^{(t)}}\widetilde{w}_i h_{v,i}^{(t)}\big\|^2.$ Here, the inequality uses the non-expansiveness of the projection onto a convex set and the fact that $v^*(x^{(t)})=\mathcal{P}_r(v^*(x^{(t)}))$ due to the boundedness of $v^*(x^{(t)})$. We will clarify this in the revision. **Q6: In the proof of Theorem 2, it does not make sense to the reviewer that "taking** $w_i = p_i$**" in L662.** **A:** Sorry for the confusion. Note that the server-side aggregations in (7) for ShroFBO replace the weights $w_i, i=1,\dots,n$ of the aggregations in (6) for SimFBO by $p_i, i=1,\dots,n$. It is equivalent to setting $w_i$ to be $p_i$ in the analysis of Theorem 1 for SimFBO. We will clarify this and directly use $p_i$ in the proof of the theorem in the revision. --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal and the clarification of the misunderstandings that I might have had before. I have updated my scores based on the rebuttal. --- Reply to Comment 1.1.1: Title: Thanks so much for your updates Comment: Dear Reviewer, Thanks so much for your updates and for raising your score. We are happy that our responses clarify your questions. We will incorporate your suggestions into our revision. 
Best, Authors
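The non-expansiveness of the convex projection invoked in the rebuttal above (Q5) can be checked numerically. A minimal sketch, assuming for illustration that the projection set is the Euclidean ball of radius `r` (the concrete set, radius, and dimensions are my assumptions, not taken from the paper):

```python
import numpy as np

def proj_ball(u, r):
    # Euclidean projection onto the convex set {v : ||v||_2 <= r}.
    norm = np.linalg.norm(u)
    return u if norm <= r else (r / norm) * u

# Non-expansiveness: ||P_r(u) - P_r(w)|| <= ||u - w|| for all u, w.
rng = np.random.default_rng(1)
r = 2.0
ok = True
for _ in range(1000):
    u = 5 * rng.standard_normal(4)
    w = 5 * rng.standard_normal(4)
    lhs = np.linalg.norm(proj_ball(u, r) - proj_ball(w, r))
    ok = ok and lhs <= np.linalg.norm(u - w) + 1e-12
print(ok)  # True
```

This is exactly the property that lets the rebuttal drop the projection operator inside the squared-error bound, together with the fact that a point already inside the ball is its own projection.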
Dataset source: NeurIPS_2023_submissions_huggingface
Conference year: 2023
Optimal testing using combined test statistics across independent studies
Accept (poster)
Summary: [Update1: During the rebuttal, I updated my score from 5 to 6. The reason is that I want to weigh in more strongly that the paper is technically solid. My concern whether NeurIPS is a perfect fit for this paper remains, and I ask the AC to judge that part.] The paper studies aggregation strategies for combining test statistics obtained from independent experiments. It is intuitively clear that the optimal strategy would be to aggregate the whole *data* and then compute one test statistic on the entirety of the data. However, the authors make the assumption that from each experiment one can only use a single real number (aka test statistic) for the final analysis. The setting they consider is that they have observations of random variables $$ X^{(j)} = f + \frac{1}{\sqrt{n}} Z^{(j)}, $$ where the $Z^{(j)}$ are independent $d$-dimensional normals and $n\in \mathbb{N}$ is an additional parameter. The null hypothesis is $f=0$, which is tested against $\|f\|_2 \geq \rho$ for some $\rho >0$. If pooling the data were allowed, the chi-square test would be optimal for this setting. They derive theoretical results that give two different regimes: - When $m \lesssim d^2$, the optimal rate can be achieved by combining the individually optimal test statistics $\|\sqrt{n}\, X^{(j)}\|_2^2$, which are undirected. - When $m \gtrsim d^2$, taking directionality into account leads to better rates. Two examples they discuss are (phrasing in my own words, which the authors might adapt): - Split the observations into $d$ groups of size $\sim m/d$ and test with the $i$-th group whether the null hypothesis holds for the $i$-th dimension. Then combine those. - Choose a 1-dimensional projection of the $d$-dimensional data uniformly at random. Aggregate all observations along this projection (now these are 1-dimensional values, as required in the setting) and combine those directly to test the null hypothesis along the projected direction. 
(Note that this requires shared randomness between all experiments.) Finally, the authors demonstrate their theoretical findings with toy experiments and numerically confirm their findings. Overall I enjoyed reading the paper and was able to learn something new. Thank you to the authors. Strengths: - The paper is extremely well written and has "textbook" quality. It focuses on a simple toy problem and provides an exhaustive analysis thereof. - Combining outcomes from different experiments is certainly an important problem in statistics. - The paper seems original; however, admittedly I am unable to judge to what extent similar analyses exist in the stats literature. - The theory and experiments are in perfect accordance and complement each other. - The paper provides a good overview of existing methods to combine test statistics and discusses both $p$- and $e$-values as examples. They relate existing methods to their contribution. Weaknesses: 1. My first point concerns the significance of the approach. In the introduction the paper states > Given multiple data sets relating to the same hypothesis, one would like to combine the evidence. Sometimes, the full data sets are not available (e.g. due to privacy or proprietary reasons) or difficult to combine directly (e.g. due to the different experimental or observational setups). In such cases, the analysis must be carried out on the basis of the published results for each of the studies. - This motivation reads as if each study publishes a test statistic without knowing of the others. I thus think that the second type of approaches the authors provide does not fall into this category. - The second type of approaches requires that the "meta-analyser" can make some choices about which statistic the individual studies report. Hence *somewhere* all data must still be available. It is unclear why we cannot use that then. I think this needs more motivation. 
- I presume that the first approach of combining the undirected tests has been studied exhaustively. 2. While the paper is relevant for machine learning in general, it is in itself a very statistical paper. No learning involved, etc. Hence, I am posing the question of whether NeurIPS is the appropriate venue. I would see a stats journal as a better fit. 3. I think the paper could be a bit more accessible if a bit more intuition about the approaches were provided. I wrote my understanding in the summary; maybe the authors want to correct/modify this and include something like that. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Please comment on my first two concerns above in the rebuttal. Note that my reservation against acceptance is based on those comments. And I am happy to increase my score after the rebuttal if I am convinced otherwise. Minor: - l. 124 "where" --> "were" - l. 339 "form" --> "from" Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 2 fair Limitations: This paper only studies a simple toy problem. Limitation of my review: I did not read the appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
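The testing setup described in the summary above can be simulated directly. The sketch below is my own illustration, not the authors' code: all parameter values (`m`, `n`, `d`, `rho`) and the 2,000 Monte Carlo replications are assumptions. It draws $X^{(j)} = f + Z^{(j)}/\sqrt{n}$, combines the undirected statistics $T_j = \|\sqrt{n}\,X^{(j)}\|_2^2$ by summing them, and estimates the power of the resulting level-0.05 test:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, d = 50, 100, 10   # studies, samples per study, dimension
rho = 0.15              # separation ||f||_2 under the alternative

def combined_stat(f):
    # Each study j observes X^(j) = f + Z^(j)/sqrt(n) and reports the
    # undirected statistic T_j = ||sqrt(n) X^(j)||_2^2; the meta-analysis
    # sums the T_j (a sum of m chi-square_d variables under the null).
    X = f + rng.standard_normal((m, d)) / np.sqrt(n)
    return np.sum(n * np.linalg.norm(X, axis=1) ** 2)

# Calibrate a level-0.05 test from the simulated null distribution (f = 0).
null_stats = np.array([combined_stat(np.zeros(d)) for _ in range(2000)])
threshold = np.quantile(null_stats, 0.95)

# Alternative with ||f||_2 = rho, spread evenly over the d coordinates.
f_alt = np.full(d, rho / np.sqrt(d))
alt_stats = np.array([combined_stat(f_alt) for _ in range(2000)])
power = np.mean(alt_stats > threshold)
print(power)
```

Shrinking `rho` toward the separation rate makes the power decay toward the level of the test, which is the threshold phenomenon the paper's theory quantifies.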
Rebuttal 1: Rebuttal: We appreciate the time and effort the Reviewer has put into evaluating our work. The Reviewer also raises a few concerns, to which we respond below. *1 -- Significance of the approach:* We agree that in the case of independent trials the test statistics are typically set independently of the number of trials. In such cases, the best attainable rate is $\frac{\sqrt{d}}{\sqrt{m}n}$. This is nevertheless a novel and important contribution: it theoretically captures the explicit loss in error rate (i.e., the $\sqrt{m}$-times worse error rate) incurred when conducting meta-analysis in such scenarios. It is important for practitioners conducting meta-analysis to be aware of this potential loss in statistical power. In our revised manuscript we have highlighted this more prominently and provided an extended description of Theorem 4, emphasising this point. The second meta-analysis approach, based on directional test statistics, is designed for scenarios where individual datasets are not centrally collected, but there is coordination among experimenters. Even if one can make some choices regarding the statistics, it does not mean that all raw data must be accessible. The result shows that (when designing experiments) advance coordination can be beneficial (e.g. one could consider reporting a directional statistic, employing shared randomness with a public seed, etc.). We have revised our explanation in the article to better reflect these contributions. *2 -- The connection to learning theory:* We agree with the reviewer that our paper could be published in a (good) statistics journal as well. However, despite the statistical nature of the paper, we believe that the problem we investigate is relevant to various fields in machine learning, particularly in scenarios where data/evidence aggregation occurs. 
So far, these areas have had limited theoretical underpinning, and deriving guarantees (from a statistical perspective) helps to better understand these phenomena and improves the explainability of these methods. We believe that NeurIPS provides a suitable multidisciplinary platform to reach an audience interested in these intersections between statistics and machine learning, one that also appreciates theoretically oriented work. *3 -- Accessibility:* We will follow your advice to include a section that translates the technical details into an intuitive understanding. We thank the Reviewer for their take on the regimes and the intuition for which testing strategies are optimal; we have adopted this intuition in the introduction and setting sections of the new version of our article. This should help readers unfamiliar with the depth of the statistical methods to grasp the key concepts. We thank the Reviewer for pointing out several of the typos; these are corrected in the new version of the paper. --- Rebuttal Comment 1.1: Title: Thanks for the rebuttal Comment: I thank the authors for their reply. I do not have any further questions. I encourage the authors to carefully write out their distinction between a pure meta-analysis of, say, p-values and an analysis that allows for coordination between the trials. While the first one is clear, I believe the latter needs more motivation. Nevertheless, after reading the other reviews and the rebuttals, **I will increase my score from 5 to 6.** I think the paper is a technically very solid contribution. [Note: ~It seems that currently I am unable to adjust my score. But I will do so later.~ Review is now updated] --- Reply to Comment 1.1.1: Title: reply Comment: We thank the Referee for the reply and the suggestion; we will follow it in the revision. We also appreciate that the Reviewer went through all the comments and rebuttals, and we thank her/him for the additional point.
Summary: This theory paper provides minimax lower and upper bounds on the testing risk (sum of Type-I and Type-II errors) for different combination methods in the specific setting of the many-normal-means model. With the testing goal of detecting the presence or absence of the signal component in this normal-means model, their results show that several combination methods for test statistics (e.g. aggregated p-values and e-values) cannot consistently detect signals below a certain threshold that depends on the number of trials, the number of samples, and the dimension of the problem. Strengths: * The theoretical results are sound and based on several established techniques in distributed testing, assuming several assumptions hold true. These are proved to indeed hold in the Appendix for the test-statistic combination methods proposed in Section 3 of the main text. * The paper is mostly well-written. Weaknesses: 1. In general, I think the phrasing of the paper's contributions could be made clearer in the last part of the introduction section. 2. The authors should discuss more the relevance of the setting -- the many-normal-means model -- in some more concrete applications. Although this is a theoretical paper, its theory is an attempt to quantify the very practical problem of meta-analysis. I see a lack of evidence for the popularity of the many-normal-means model in practice. 3. Slightly related, though not of equal importance: the authors should have acknowledged that a limitation of their work is that the theoretical results only hold under the many-normal-means model assumption. 4. Experimental results could include more settings with a variety of sizes/dimensions (perhaps in the appendix) to support the theory. 
Technical Quality: 3 good Clarity: 2 fair Questions for Authors: * I have only skimmed through the proof in the appendix, so this might be my mistake, but I do not see the appearance of the $\epsilon$ term for binary approximation of the statistics in the main results. Could the authors clarify on this point? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: * See weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the Reviewer for reviewing our manuscript and for the constructive feedback. We address the raised concerns point-by-point below. *1 -- Phrasing of the paper's contributions:* Based on your feedback, to further improve our manuscript, we have restructured and emphasised the contributions in the last part of the introduction to clearly outline the scope of our paper and our findings. *2 -- Relevance of the setting:* The many-normal-means model is indeed a specific statistical model, but it is foundational to statistical theory, capturing phenomena which readily extend to more complicated statistical models, for instance nonparametric regression, classification, and density estimation, just to name a few. In the revised manuscript we provide a more extensive description of this model and its connection to other, practically more important statistical models, citing literature that further expands on this connection: * Large-scale inference: empirical Bayes methods for estimation, testing, and prediction - Efron (2012) * Introduction to Nonparametric Estimation - Tsybakov (2003) * Mathematical foundations of infinite-dimensional statistical models - Giné and Nickl (2015) * Deficiency distance between multinomial and multivariate normal experiments - Carter (2002) It is because of this strong connection to other statistical models that the many-normal-means model serves as a benchmark for deriving error rates, even when one has more complicated statistical settings in mind, as the error rates often extend. 
See for example: * Information-theoretic lower bounds for distributed statistical estimation with communication constraints (https://proceedings.neurips.cc/paper_files/paper/2013/file/d6ef5f7fa914c19931a55bb262ec879c-Paper.pdf) - Zhang, Duchi, Jordan and Wainwright (2013) * Handling Sparsity via the Horseshoe (https://proceedings.mlr.press/v5/carvalho09a/carvalho09a.pdf) - Carvalho, Polson and Scott * Needles and straw in haystacks: empirical Bayes estimates of possibly sparse sequences (https://arxiv.org/pdf/math/0410088.pdf) - Johnstone and Silverman (2004) *3 -- Acknowledgement of limitations:* We have further emphasised in the revised version that our results concern the many-normal-means model. Extensions to other, more complex models (see the examples above) are left for future work. *4 -- Experiments:* We have extended the simulation settings by expanding the appendix. The simulations now include a variety of high-dimensional settings and differing sizes. *Question concerning the $\epsilon$ dependence:* The $\epsilon$ term is absorbed in the constant $c > 0$ in the main results to improve readability. If explicit dependence on $\epsilon$ is desired in the main results, we will gladly change this in the new version. In conclusion, based on your feedback, we have identified areas where we could enhance our manuscript; we hope that the revisions address your concerns. --- Rebuttal Comment 1.1: Title: Thank you for your rebuttal Comment: I think the authors have answered all my questions thoroughly. However, I agree with another reviewer's point that this work in general leans more toward a statistical methodology paper; therefore I maintain my score as Weak Accept, as I do not see a major problem with it. --- Reply to Comment 1.1.1: Title: reply Comment: We are grateful to the Reviewer for their response and their thorough examination of all the comments and rebuttals. 
For the purpose of clarification, we would like to ask whether the final score of the reviewer is "Weak Accept" (6) or a "Borderline Accept" (5)? Thank you in advance for your reply.
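The many-normal-means model discussed in this thread, $X = f + \frac{1}{\sqrt{n}}Z$ with $Z \sim N(0, I_d)$, can be simulated in a few lines. This is an illustrative sketch (not the authors' R script); the chi-squared check at the end is a standard fact about the model rather than anything specific to the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_many_normal_means(f, n, rng):
    """One observation X = f + Z / sqrt(n), with Z ~ N(0, I_d)."""
    return f + rng.standard_normal(len(f)) / np.sqrt(n)

d, n = 100, 50
f = np.zeros(d)                       # the null hypothesis: f = 0
X = sample_many_normal_means(f, n, rng)

# Under the null, n * ||X||^2 = ||Z||^2 follows a chi-squared
# distribution with d degrees of freedom, so it concentrates around d.
stat = n * np.sum(X ** 2)
```

Under an alternative $f \neq 0$, the same statistic shifts upward by roughly $n\|f\|^2$, which is the signal the combination methods below are trying to detect.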
Summary: Authors study a problem of optimal combination of p-values in a meta-analysis context. Specifically, they focus on characterizing the minimax separation rate for a family of "smooth-ish" combination methods that aggregate p-values (or e-values). They show that: * The family contains many methods that are used in practice. * The separation rate for methods in the family is nearly optimal. * The optimal combination method depends on the problem setting (sample size, number of p-values, dimension of the problem); they describe practical consequences. Furthermore, they explore how the rate can be improved by allowing coordination between the experiments that generate the p-values. Strengths: The paper is written clearly and without excessive statistical jargon. For instance, the concept of separation rate is introduced and explained in a simple and intuitive way, allowing readers to understand the results presented in the paper (in contrast to the mathematical description found in "Non-asymptotic minimax rates of testing in signal detection"). The significance of the results is solidly established on two grounds: derivation of a separation rate and practical guidance for method selection based on n, m, and d (Sections 2.2 and 2.3 add extra value). From my (limited) understanding of the literature, these results are both novel and original. Weaknesses: I am genuinely surprised that this problem has not been previously studied. While preparing the review, I came across similar/related results concerning the Family-Wise Error Rate (FWER) in the paper "Family-Wise Separation Rates for multiple testing." However, the combination methods studied are different (Holm–Bonferroni procedure type). I'd suggest that a more comprehensive and thorough discussion of the existing literature on minimax testing for multiple hypothesis testing be added to the paper. 
Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: See above Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate the time and effort spent on evaluating our paper, the positive feedback and the insightful suggestions. Below, we address the specific suggestions raised by the Reviewer. *Novelty of the results:* Yes, indeed, we were also surprised by the lack of theory for meta-analysis in hypothesis testing with a composite alternative hypothesis. In fact, this was the main motivation of our research: to fill this gap in the literature. *Connection to multiple testing:* We agree with the reviewer that multiple testing has a lot of similarities to the meta-analysis problem at hand. The common goal is to somehow aggregate the outcomes of multiple (often independently executed) tests. However, there are also major differences. In the meta-analysis framework (e.g. when combining p-values) one considers a given hypothesis and executes multiple experiments to decide its validity. Then the outcomes of these tests are combined for a more accurate decision about the common hypothesis of interest. In contrast, in multiple testing each experiment (possibly) considers different hypotheses, and the goal is to decide which ones should be rejected. Here the main challenge is to obtain uniform guarantees over all hypotheses (e.g. by controlling the FWER or FDR). Following the suggestion of the Referee we discuss the connection between these related but different testing problems. --- Rebuttal Comment 1.1: Comment: I'm confused by this comment. Bonferroni would do just fine achieving the goal of combining p-values, yet the Bonferroni combination function is not smooth and does not fall into your framework. Your answer does not give me great confidence that you actually reviewed the multiple hypothesis testing literature thoroughly. Please formalize the difference between the multiple testing and meta-analysis problems. Temporarily moving down to 5. --- Reply to Comment 1.1.1: Comment: We are sorry to hear that our response caused confusion. 
We provide below a detailed response on the difference between multiple testing and meta-analysis, in addition to explaining that our framework actually includes Bonferroni's method. **On Bonferroni's method:** Bonferroni's method would entail combining p-values as $m \cdot \min \{p^{(1)},\dots,p^{(m)} \}$ (see e.g. display (1) in [VOVK, V., AND WANG, R. Combining p-values via averaging - Biometrika 107]). This is in fact covered by our framework: on page 7 we discuss generalized averages as defined for example in [VOVK, V., AND WANG, R. Combining p-values via averaging - Biometrika 107]. This framework also encompasses the Bonferroni correction, which is a generalized average (with $a_{r,m} = m$ in the notation of page 7 of our paper). We would also like to highlight that our framework is more general than just smooth combination functions, as Assumption 3 concerns only Hölder continuity (which is satisfied by Tippett's method or a Bonferroni correction as well). We did not highlight this method as it is more conservative than e.g. Tippett's method ($ \min \{p^{(1)},\dots,p^{(m)} \}$), and is often considered when the p-values being combined might have dependencies because they e.g. concern the same data, which is the case in multiple testing, see the definition below. **Comparing multiple testing to meta-analysis:** *Multiple testing*: Let $X$ be some data drawn from an unknown distribution $P$. Based on the true distribution $P$, a hypothesis $H$ is either true or false; we say $H$ is true if it belongs to a set $\mathcal{T}_0$ and false if it belongs to $\mathcal{T}_1$ otherwise. Given a collection of such hypotheses $\mathcal{H}$, using the data $X$ one tries to discern $$H \in \mathcal{T}_0 \text{ versus } H \in \mathcal{T}_1$$ for all $H \in \mathcal{H}$ simultaneously. This is a very general definition of multiple testing; see for example [FROMONT ET AL. Family-Wise Separation Rates for multiple testing - Ann. Statist. 44(6)]. Prototypical examples are testing for each gene in a sequence separately whether the gene plays a role in a given disease, or testing the returns of different portfolios to find which portfolios have higher than market returns. For example, a multiple testing problem in the context of the many-normal-means model considered in our paper is $$H_{0k}: f_k = 0 \text{ versus } H_{1k}: |f_k| \geq \rho_k$$ for $k=1,\dots,d$, given data $X = f + \frac{1}{\sqrt{n}}Z$, $f \in \mathbb{R}^d$. For such a collection of hypotheses, one tries to discern multiple, different hypotheses on the basis of the same data. Standard approaches in multiple testing are: the Bonferroni correction and Holm's method (to control the family-wise error rate) and Benjamini–Hochberg (controlling the false discovery rate). *Meta-analysis* can be performed when there are multiple scientific studies addressing the *same question* (see e.g. [Hedges et al. - Introduction to meta-analysis] or Wikipedia). In our analysis, we consider testing where $m$ studies address the *same hypothesis* and the goal is to combine the study outcomes (e.g. their reported p-values). Prototypical examples would be multiple experiments conducted to establish whether a given drug has *any* effect (e.g. whether a given blood pressure medication indeed lowers blood pressure), or multiple studies concerning the question whether smoking causes cancer. In our setting, we only consider the same hypothesis (i.e. $\mathcal{H}$ is a singleton) in each study: $$H_{0}: f = 0 \text{ versus } H_{1}: \|f\| > \rho. $$ *In conclusion:* Although a Bonferroni correction falls within our framework, it is unnecessarily conservative as we do not use the same data for testing more than one hypothesis. Therefore we did not explicitly mention it, as e.g. Tippett's method is more appropriate for our setting. Nevertheless, in our updated version we explicitly refer to this method as well, as an example of a generalized average. 
We hope that our definitions above highlighting the conceptual differences of meta-analysis and multiple testing are satisfactory.
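The two combination rules discussed in this thread can be sketched in a few lines; this is an illustrative sketch of the standard definitions, not code from the paper:

```python
def bonferroni(pvals):
    """Bonferroni combination: m * min(p), capped at 1 so it stays a valid p-value."""
    return min(1.0, len(pvals) * min(pvals))

def tippett(pvals):
    """Tippett's combination: the exact p-value of min(p) under independence,
    1 - (1 - min(p))^m, which is never larger than the Bonferroni bound."""
    return 1.0 - (1.0 - min(pvals)) ** len(pvals)

pvals = [0.04, 0.20, 0.50]
# Bonferroni: min(1, 3 * 0.04) = 0.12; Tippett: 1 - 0.96**3, about 0.115.
```

The Bernoulli inequality $(1-p)^m \geq 1 - mp$ shows Tippett's combined p-value is always at most the Bonferroni one, which is the sense in which Bonferroni is the more conservative of the two.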
Summary: The paper addresses the problem of combining test statistics from multiple independent studies in the context of null-hypothesis significance testing. The authors derive a mathematical framework to quantify the cost of compressing multiple independent trials of a study into one real-valued test statistic, and they derive minimax lower and matching upper bounds for the testing errors. The many-normal-means model is used as a toy example. Strengths: The paper addresses the problem of combining the results of multiple empirical studies towards one common hypothesis, an important problem in meta-analysis. Weaknesses: This submission seems to be out of the scope of NeurIPS. While the authors draw a connection between meta-analysis as used in statistics and meta-learning as used in the context of machine learning, this connection is not clarified further. For the rest of the paper, the authors seem to focus on the problem of meta-analysis. It was challenging to read the paper as it lacks clarity in the introduction. To improve clarity, I suggest starting the introduction by clearly stating the problem that will be addressed in the paper and clearly introducing the terms used in the text. For example, in line 23, the authors introduce “meta-analysis”, (probably) referring to the technique of combining the results of multiple scientific empirical studies, and then set it equal to “meta-learning”, referring to the machine learning technique of improving a learning algorithm to perform well on multiple tasks. I did not go through the technical parts of the paper in all detail. However, there are several statements in the paper that appear to be misleading, e.g., - line 68: “… includes many standard meta-learning techniques, for instance the standard p-value combination methods […]”. I am not aware of p-value combination being a standard technique in meta-learning. 
- line 313: “Common examples of e-values are Bayes factors and likelihood ratios.” e-values and Bayes factors are closely related, but Bayes factors or likelihood ratios are not e-values in general; see https://arxiv.org/abs/1912.06116 Appendix A for clarifications. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Can you clarify the connection between "meta-analysis" and "meta-learning" (meaning the approach reviewed in your reference number [14])? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors do not discuss limitations or potential negative impact of their work. Although this is a more technical paper, I think it would have been appropriate to discuss the limitations of the meta-analysis approach as a whole. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the Reviewer for the effort of evaluating our paper and we are happy to hear that the Reviewer shares the opinion that the problem is important to study. The Reviewer does not specifically comment on the soundness of the mathematical framework, but highlights potentially misleading use of terminology and a lack of clarity in the introduction, both of which we address below. *Connection to ML:* We have taken the advice of the Reviewer to heart and improved the clarity of the introduction by providing an expanded and clearer explanation of the problem. We have also expanded the introduction to more explicitly describe the connection and relevance to more modern machine learning applications. *Meta learning vs meta analysis:* In our analysis we used meta-learning / meta-analysis to refer to statistical and/or machine learning techniques that distil information across studies and different data sets to form a more informed inference. We are sorry that using these terms basically interchangeably in our manuscript caused confusion. In the literature we did not find a standard interpretation of these terms (as is also discussed e.g. in the corresponding Wikipedia page) and found that their interpretation depends on the specific community. However, to avoid confusion, we have decided to abstain from using the term meta-learning and stick to meta-analysis throughout the text. *Likelihood ratio as e-value:* Whilst it is true that in situations where the null hypothesis is composite, a Bayes factor or likelihood ratio is not necessarily an e-value, we are considering a setting in which the null hypothesis is simple. Any Bayes factor given by a likelihood ratio of a mixture distribution $P_\pi := \int P_f d\pi(f)$ and the measure under the null hypothesis forms an e-value: $E := \frac{dP_\pi}{dP_0}$ satisfies $E \geq 0$ and $P_0 E = 1$. 
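As an illustrative aside (not from the paper), the identity $P_0 E = 1$ for a likelihood-ratio e-value under a simple null can be checked by Monte Carlo; the Gaussian alternative $N(\mu, 1)$ used here is a hypothetical choice:

```python
import numpy as np

rng = np.random.default_rng(1)

def likelihood_ratio_evalue(x, mu=1.0):
    """E(x) = dP_mu / dP_0 (x) for N(mu, 1) versus the simple null N(0, 1)."""
    return np.exp(mu * x - mu ** 2 / 2)

# Under the simple null P_0 = N(0, 1), E is nonnegative and E_0[E] = 1
# exactly, so the likelihood ratio is a valid e-value.
x_null = rng.standard_normal(200_000)
mean_under_null = likelihood_ratio_evalue(x_null).mean()   # close to 1
```

The same check fails for a composite null, where $E_0[E] = 1$ need not hold under every null distribution, which is the distinction the reviewer's reference draws.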
We have expanded on the statement: ``Bayes factors and likelihood ratios form common examples of e-values.'' to clarify that this should not be read in full generality, but in the context of hypothesis tests such as those considered in this article. We have added additional discussion concerning the limitations of the meta-analysis approach as a whole. We would like to thank the Reviewer again for their consideration. --- Rebuttal 2: Title: Is Meta-Learning vs. Meta-Analysis clarified? Comment: Dear reviewer colleague MHU2, in order to reach a final judgment of the paper, I would like to understand whether your concerns have been addressed by the rebuttal or not. I share your definition of "Meta-Learning" and also think that, at least within the NeurIPS community, using "meta-analysis" is much more appropriate. But the authors promised to change that and I do not see any further problem with it. Do you? If it was clarified, then frankly I feel that a low score of 3 with a (quite high) confidence of 3 is not adequately reflected in your review. I think there is about a day left to ask further questions to the authors. I would like to be able to read their answer, if there are any questions still. Thank you!
Rebuttal 1: Rebuttal: First of all we would like to thank the Reviewers for carefully reading our paper and for their interest in our work. We are happy to hear that the majority of the reviewers found our paper "well presented" (aj9v), "well-written and organised" (qxf1), written clearly and without excessive statistical jargon (vLRc), and "extremely well written" with "textbook" quality (ThPF). We are also glad that the reviewers found our work "original" (ThPF, vLRc) with "sound theoretical results" (GjrH), "up to small typos, correct mathematical proofs" (aj9v), and that "The theory and experiments are in perfect accordance and complement each other." The referees have also raised a few concerns and provided several suggestions, which we have addressed point-by-point in the individual rebuttals. Here we collect the main changes in the manuscript. * We have extended the discussion of our results and numerical analysis following the suggestions of the referees (see the comments below for details). We have also corrected the typos pointed out by the referees. * We have clarified the use of potentially misleading jargon to further improve the readability. * We have extended the simulation study; the new experiments use substantially more repetitions, resulting in a more accurate picture with negligible visible randomness. * We have expanded on the relevance of our work to the field of machine learning. * We have added references to the existing literature on multiple testing where the error rate is of concern. * We provide a more extensive description of the model and its connection to other, practically more important statistical models, whilst also emphasising the limitations of this model. * We added more intuitive explanations as well as an interpretation of which method / regime applies in practice, depending on the experiment design. We would like to thank all the Reviewers for their consideration. Pdf: /pdf/6d34bea098ae1b0a4af542892193b2b6d2818f92.pdf
NeurIPS_2023_submissions_huggingface
2,023
Summary: The paper lies in the context of meta-analysis of multidimensional models. Usually, meta-analyses are performed by combining p-values or e-values. In both cases, the statistical power is not well known. The authors provide a constrained framework based on the many-normal-means model. Based on this framework, they derive lower (Theorem 1) and upper (Theorem 2, for "rate optimal methods") bounds for testing methods. Theorem 3 takes advantage of possible shared randomness between trials, especially when the dimension is small relative to the number of trials considered. The authors then compare the performances of rate optimal methods (described in Section 2.1), the chi-square test on pooled data and the single-trial approach on simulated datasets. Strengths: The paper is well-written and organised. It provides a good overview of state-of-the-art combination techniques for meta-analyses and identifies the lack of knowledge about their relative power. The paper derives bounds for the testing errors by introducing a principled mathematical framework based on multidimensional models, where a loss in power is expected. It also gives insights into rate optimal combination methods and the effects of sharing randomness between trials. The simulation study provides some results on comparing combination methods for meta-analysis, which is not properly addressed in the current literature. About the supplementary material: Proofs as well as an R script to reproduce the simulation study are provided. Weaknesses: The main weakness of this paper is the clarity of the mathematical developments. The framework implies several assumptions that could be explained more. The 13-page-long supplementary material provides proofs of the theorems but is sometimes hard to follow. In the theorems and their proofs, sometimes arbitrary values are chosen (for $\alpha$ and $\beta$ notably), which seems to make the reasoning more confusing. 
Also, when running the R script, the following error is returned: Error: object 'dat_long' not found Technical Quality: 3 good Clarity: 3 good Questions for Authors: - $\mathbb{E}_0$ is first used in line 141 but not introduced beforehand. Is it the expectation under the null hypothesis? - I understand that Assumption 1 aims at restricting the values of S. Is it possible to give a small interpretation of the assumption? - Theorem 3 indicates that "there exists a constant $C_\beta$" but the formula indicates "$C_{\alpha,\beta}$". Is it a typo? - In Theorems 1 and 3, arbitrary values are used for $\alpha$ and $\beta$, but not in Theorem 2. Why choose these values instead of giving general results for the corresponding intervals of validity? - A similar remark on the proofs, for example in proof A.1. Why use $\kappa_{1/10}$ and $\kappa_{1/8}$? - It might also be more comprehensible to explicitly add the results taken from the literature and used in the proofs. - The provided R script needs to be reviewed. When running it, the following error is returned: "Error: object 'dat_long' not found" I understand that the chosen mathematical framework provides lower and upper bounds for testing errors. The paper describes some meta-learning techniques that attain these bounds. I am not sure how the simulation study demonstrates the theoretical results. Is it by comparing these meta-learning techniques to the "chi-squared pooled" approach and the single-trial approach? The indicator of performance for "optimal testing" is the ROC curve. Would it be interesting to consider other criteria, such as sensitivity, specificity, precision, or F-score? Overall, I encourage the authors to add more explanations and interpretations, especially in the mathematical development part. Note that the 9-page limit does not include references. It might also be worth submitting the paper to another journal where the format might be more adequate. 
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have delimited the framework of their contribution by providing constraints inherent to their model. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the Reviewer for the thorough analysis and thoughtful feedback; we appreciate the time and effort. We address below the questions and comments of the Referee point-by-point: * Thank you for pointing this out, we have clarified in the new version of the paper that $\mathbb{E}_0$ corresponds to the expectation under $\mathbb{P}_0$, i.e. the probability distribution corresponding to the null hypothesis. * Assumption 1 requires that the combined test statistics can be ``rescaled'' appropriately. This assumption works in conjunction with Assumption 2, as the degree of Hölder continuity of the combination function depends on the ``scale'' (or rather, the effective support) of the underlying test statistics. We provide a more detailed explanation of the conditions in the new version of the article. * The $\alpha$ subscript is indeed a typo, which we fixed in the new version. * At first we had provided an interval of validity for $\alpha$ and $\beta$. However, we felt that working with a fixed, reasonable number would increase the readability. Following the comments of the Referee we have changed this back and provide a validity interval. * Similarly, for the quantities depending on $\alpha$ or $\beta$ (e.g. $\kappa_{\alpha}$) we considered a fixed value of $\alpha$ to improve readability. But as above, they can be given on an interval as well, if requested. * Following the Referee's suggestion, in the new version, to increase the self-containedness and improve the readability, we have added the frequently referenced lemmas and theorems to the appendix. * We apologise for the error encountered while running the R script. It is caused by an undefined object in a line that is only meant to serve as an example. The part of the code that runs the simulation (from the "#SIMULATION" comment downwards) should run without error. 
* We have extended the explanation of the simulation study and described its connection and relevance to the theoretical results in more detail. Indeed, the first message is that meta-analysis is substantially better than using just one experiment. The second is that combining $\chi^2$ statistics is substantially worse than computing the $\chi^2$ test using all the data. Finally, we show that depending on the interplay of $m$ and $d$, either the standard $\chi^2$ combination or the novel directional test statistics method is better. * We note that the ROC curve indicates the trade-off between sensitivity and specificity for each of the tests, and hence describes the performance of the test. * Following the suggestion of the Referee, we have extended the simulation section; see the attached figures in the general rebuttal. --- Rebuttal Comment 1.1: Comment: I have read the response of the authors and thank them for considering my remarks and those of my fellow reviewers. As for the R script, when I run the code from the "#SIMULATION" comment downwards as suggested by the authors, I still get the same error. Overall, I wish to maintain my score of 6.
Summary: The paper considers methods to aggregate test statistics from different, independent sources, in order to construct an aggregated test with hopefully more power. The key contribution of the paper is the study of the minimal treatment effect which can be detected in a standard Gaussian noise setting, for which they obtain minimax rates. These rates exhibit an elbow effect when the number of aggregated statistics (m) is close to the square of the dimension of the signal (d^2), which the authors relate to the use of the signal direction in the test (when $m < d^2$, the standard chi2 test would give near optimal power, while for $m > d^2$, the test statistics must encode directional information if optimal power is to be obtained). These rates bring two insights: First, aggregating one-dimensional statistics in a multi-dimensional setting comes at a price. Second, there is no single optimal aggregating method. Strengths: Overall, the paper is well presented and obtains conclusive results in the scope considered, in the form of minimax rates. These rates justify previous empirical insights on aggregated testing strategies, notably the need for different aggregating strategies depending on the number of tests and the dimension of the problem. Methods achieving the rates (up to a log factor) are specified. As far as I could assess, the mathematical proofs are, up to small typos (see weaknesses), correct. The presentation of the main results in Section 2 can be easily followed (minimax rates in the general case, optimal combination methods, then an improved minimax rate using coordination between tests). Weaknesses: The proof in the appendix suffers from some small typos. Notably, I believe that in equation (S.1), the $2\epsilon$ term should be $\epsilon$ (or $\epsilon< \frac{1}{2}\left(\kappa_{1/10} - \kappa_{1/8}\right)$ in the definition of $\epsilon$), while in line 538, the conclusion of Markov's inequality is that $D^c$, not $D$, has mass less than $1/64$. 
The methodology used to obtain Figure 1 could be improved. Notably, the ROC curves for chi-square combined and chi-square pooled should not exhibit any randomness, since these two curves can be computed in closed form using the cumulative distribution functions of the central and non-central chi-square distributions. If numerical approximations are to be used, it would be possible to obtain curves exhibiting much less noise by increasing the number of repeats and recycling them for all FPR values (I could robustly obtain curves exhibiting little to no noise using 10 000 repeats and 100 FPR values in less than 2s on my personal computer, so computation time is not an issue). Moreover, the way the $f_i$ are drawn, using Rademacher random variables, might have an impact on the directional methods. While this might or might not be the case, I would suggest recomputing the curve, drawing a random f uniformly on the sphere. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Could the methodology be extended beyond the current setting? Notably, is there any natural generalisation to the case where the noise level $\sigma = 1/\sqrt{n}$ can no longer be assumed to be identical? Is there any explanation of the results in terms of the distribution of the p-values under the alternative hypothesis? If so, is there any insight on the best way to aggregate a given set of statistics (instead of considering the best way to aggregate the best statistics for a given m, d)? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The paper derives optimal test aggregation strategies in the context of Gaussian-noised signals. 
A first limitation of the paper is that it is assumed that the sample size $n$ considered in each collected test is identical. This can barely be expected in practice, and as such, insight about the impact of uneven tests would be welcome (i.e., does $mn$ translate into $\sum_{i=1}^m n_i$, or rather $m \min(n_i)$?). This issue is not mentioned in the paper. Another limitation not mentioned is the fact that, in practice, the test statistics obtained from independent trials are not chosen, but set, and as such, Stouffer's method, which attains the minimax rates when $m>d^2$, is not implementable. In most settings where it could be implemented, the whole $X_i$ information would be known, and therefore the dimension reduction issue would not occur. For this reason, the best rate achievable with realistic one-dimensional statistics is of particular interest. Unfortunately, this is left to the appendix, in Theorem 4. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
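The reviewer's point that the chi-square ROC curves admit a closed form can be sketched as follows. The parameter values are illustrative, and the noncentrality $\lambda = mn\|f\|^2$ for the pooled test is a standard fact about Gaussian models rather than something taken from the paper:

```python
import numpy as np
from scipy.stats import chi2, ncx2

def pooled_chi2_roc(d, lam, fpr):
    """Exact ROC of the pooled chi-square test: the statistic is chi2(d)
    under H0 and noncentral chi2(d, lam) under the alternative, so for
    each false positive rate the true positive rate follows in closed form."""
    thresholds = chi2.ppf(1.0 - fpr, df=d)
    return ncx2.sf(thresholds, df=d, nc=lam)

fpr = np.linspace(1e-3, 1.0 - 1e-3, 100)
tpr = pooled_chi2_roc(d=10, lam=8.0, fpr=fpr)
# The curve lies on or above the diagonal, reflecting nontrivial power.
```

No Monte Carlo repeats are needed for these two curves, which is exactly why the reviewer expects them to be noise-free.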
Rebuttal 1: Rebuttal: We express our sincere thanks to the Reviewer for taking the time and effort to thoroughly review our paper, and for the insightful comments and constructive feedback. The Reviewer also identifies areas for improvement, which we address point-by-point below. *Typos:* Thank you for pointing out these typos, we have corrected them in the revised manuscript. *Simulation study:* We have rerun the simulations with a much larger number of repeats (i.e. 100 000 repeats) and considered a higher-dimensional setting. Furthermore, following the suggestion of the Referee, we have drawn the parameter $f$ uniformly from the sphere. See the attached figures in the general rebuttal. We note that we did not observe a significant difference compared to the Rademacher prior, but if it is deemed beneficial, we can include this in the simulations added to the appendix of the new version of the article. *Optimal test statistic:* We agree that in the case of independent trials the test statistics are typically set independently of the number of trials. Hence, in such cases the direction-based approach cannot be applied and the best attainable rate is $\frac{\sqrt{d}}{mn}$ following Theorem 4. We completely agree with the Reviewer that this result is of particular interest in such settings. In our revised manuscript we have highlighted this more prominently, stressing the importance of theoretically capturing the explicit loss in terms of error rate (i.e. the $\sqrt{m}$-fold worse error rate) when conducting meta-analysis in such scenarios, and we have provided an extended description of Theorem 4. It is important for practitioners conducting meta-analysis to be aware of this potential loss in statistical power. Nevertheless, one could also consider situations where the number of trials is known in advance or a new round of experiments is executed in addition to some available earlier ones. 
In such cases a better rate is attainable by choosing the non-standard directional test statistics, which are designed for scenarios where individual datasets are not centrally collected but there is coordination among experimenters (depending on the interplay between m and d). *Concerning the heterogeneity and different sample sizes:* In the revised paper we have included a discussion concerning extensions in this direction. We note that in case the sample sizes are of the same order, i.e. $n_j \asymp n_k$ for all $k,j=1,\dots,m$, the current rates can be proven with minor modifications. If the local sample sizes can be substantially different, then the question is more complicated. The rate with $m {\min}_j n_j$ can of course be easily obtained, but it is rather pessimistic: e.g. if $n_1\gg\underset{j=2}{\overset{m}{\sum}} n_j$ then almost no extra information comes from the experiments $j=2,...,m$ and the rate is determined by the first sample size $n_1$, which is substantially better than $m {\min}_j n_j$. However, the rigorous analysis of this question goes beyond the scope of our paper. *Question concerning insight on how to aggregate:* This is a great question, and something that was not highlighted in the old version of the paper. The insight is mainly in the fact that, in terms of error rate, many methods perform optimally in principle, as long as the underlying p-values / test statistics have decent power. That is, Fisher's method, Stouffer's method, or simply averaging can all reach the $\frac{\sqrt{d}}{\sqrt{m}n}$ optimal rate. Beyond that, simulation of the behaviour of p-values / test statistics for specific alternatives that are deemed especially important to detect is also an option. We have added a remark reflecting this to the new version of the paper. --- Rebuttal Comment 1.1: Title: Answer to rebuttal Comment: I have read the authors' response and thank them for taking the remarks into consideration. 
For the simulation study: Thank you for rerunning the simulations; the new plots are cleaner and easier to interpret. The authors have thoroughly answered my concern about the potential bias due to the Rademacher prior. Concerning the heterogeneity and different sample sizes: Including a discussion in the revised paper is sufficient, and I agree with the authors that a rigorous analysis of the setting where $m \min_j n_j \ll \sum_j n_j$ is beyond the scope of the present paper. All in all, the authors' answers were satisfactory, as they cleared up a potential weakness and added discussions on potential extensions of their work. The paper is technically solid and well presented. Overall, I wish to maintain my score of 6.
null
null
null
null
Large Language Models Are Semi-Parametric Reinforcement Learning Agents
Accept (poster)
Summary: The paper proposes an LLM + RL architecture for text-based domains. The key idea is to run Q-learning on the side and augment the LLM's prompt with Q-values for available actions. The evaluation is done on WikiHow and WebShop. Acknowledging the rebuttal, I'm satisfied with the authors' responses and happy to increase the score. Strengths: The paper's core idea---adding Q-values to LLM prompts---is interesting. Weaknesses: Clarity is the main issue with the paper. It is hard to read and hard to understand. Here are some of the issues that stand out: - In section 3.2 the Q-learning is introduced. It is not clear how exactly it is performed. Is function approximation used? If so, what is the network architecture? Is it tabular? If so, how come we encounter exactly the same observations in the test set? - On line 163 the term "flattening" is used. This is not a common term in the literature; what is meant by it exactly? Eq. 4 shows bootstrapping for Q-value estimation. Is that what is meant? - There is a substantial amount of heuristics used to make the system work, but they are not well explained and their influence on the results is not well evaluated. For example, lines 227-232 describe categorisation of the web pages into categories. Is this used for Q-learning? What happens if this categorisation is not used? The same applies to lines 243-249 for WikiHow. - It is not clear whether the difference to the baselines is significant. For example, on WebShop ReAct has an avg score of 66.6 (from the original paper) and the proposed method has 68. Is that a significant difference? The success rate is 40 for ReAct in the original paper, but is reported as 36 in this paper. Where does the difference come from? Since the proposed method achieves a success rate of 38, this warrants a clarification. - The positioning in the introduction adds to the confusion. The discussion about using external memory is misleading, as what is later proposed is Q-learning augmentation.
Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Would authors please address the points outlined in the weaknesses section. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The paper does not have limitations section Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your kind review. **About Q-Learning and $Q$ function architecture**: The $Q$ function is implemented with the experience memory as a lookup table, and Q-Learning is applied to this experience-memory-based lookup. Note that we don't use this lookup to predict a $Q$ value for new observations during inference. All the records in the lookup come from the training set and serve as candidate exemplars in the input prompt to the LLM. During the test, it is the LLM that speculates a $Q$ value for new observations and distinguishes good actions from bad ones. Here is a naive example:

Experience Memory:

| Task | Observ. | Act. | $Q$ |
|-------|---------|-------|-------|
| $g_0$ | $o_0$ | $a_0$ | $q_0$ |
| $g_1$ | $o_1$ | $a_1$ | $q_1$ |

Assume the new observation (& task) encountered during the test is $(g, o)$. Several similar experiences will be retrieved from the memory, *e.g.*, $(g_0, o_0)$ and $(g_1, o_1)$. Then the LLM will be fed: $$ g_0, o_0: a_0 \rightarrow q_0 $$ $$ g_1, o_1: a_1 \rightarrow q_1 $$ $$ g, o: $$ The LLM is prompted to predict actions in the format $a \rightarrow q$ for $(g, o)$. **About "flattening"**: Eqn. 4 does show what "flattening" is: we flatten/expand the preceding $n$ steps in the Bellman equation. We find that this is usually called $n$-step Q-Learning or $n$-step bootstrapping [1, 2] and we will update the expression accordingly in a later revision. **About categorization in Sec. 4.1**: The categorization of pages in WebShop and instructions in WikiHow is used to calculate the similarity function introduced in Sec. 3.3. The similarity function itself is not related to the Q-Learning procedure, but is used to select relevant exemplars in the input prompt to the LLM for in-context learning. We will improve the corresponding explanation and make it clearer in a later revision.
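To make the lookup-based $Q$ function described above concrete, here is a minimal, hypothetical sketch (class and method names are our own illustration, not the paper's code): a dict-backed experience memory with a basic one-step Q-Learning update, where the $\max$ runs only over actions already recorded for a state and unrecorded actions default to 0.

```python
class ExperienceMemory:
    """Hypothetical tabular experience memory: (task, obs, action) -> Q."""

    def __init__(self, alpha=1.0, gamma=0.5):
        self.q = {}          # the lookup table itself
        self.alpha = alpha   # learning rate
        self.gamma = gamma   # discount factor

    def max_q(self, task, obs):
        # max over actions *already recorded* for (task, obs);
        # unrecorded actions are deemed to have a default value of 0
        vals = [v for (g, o, _), v in self.q.items() if (g, o) == (task, obs)]
        return max(vals, default=0.0)

    def update(self, task, obs, action, reward, next_obs):
        # basic one-step Q-Learning backup on the lookup table
        target = reward + self.gamma * self.max_q(task, next_obs)
        old = self.q.get((task, obs, action), 0.0)
        self.q[(task, obs, action)] = old + self.alpha * (target - old)

    def format_prompt(self, exemplars, task, obs):
        # render retrieved exemplars as "g, o: a -> q" lines, as in the
        # naive example above, ending with the new (task, obs) to complete
        lines = [f"{g}, {o}: {a} -> {q}" for (g, o, a), q in exemplars]
        lines.append(f"{task}, {obs}: ")
        return "\n".join(lines)
```

With $\alpha = 1$ and $\gamma = 0.5$, updating a terminal step with reward 1 and then its predecessor with reward 0 propagates a discounted value of 0.5 back to the predecessor, as the one-step backup prescribes.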
**About ReAct results**: The reported ReAct results in this paper are obtained with the *GPT LLM text-davinci-003* on 100 test tasks, while the results reported in the original ReAct paper are obtained with the *PaLM LLM* on 500 test tasks. That's why the success rate reported in our Tab. 3 is **44**, while the success rate in the original paper is 40. The success rate of 36 in Tab. 1 is an **average** result across three different exemplars (the result 44 uses exactly the same exemplar as the original ReAct paper). You can also refer to the average result of 36 in the "Avg" column in Tab. 3. Rather than significance of improvement, we mainly claim superior *robustness to the initial exemplars* compared to ReAct, as shown by the results across different exemplars in Tab. 3. It may take additional human labor and budget to find an optimal exemplar that guarantees the performance of raw ReAct. In contrast, Rememberer holds a more stable performance across different initializations and can reduce the human labor required to construct appropriate exemplars. **Relation between Q-Learning and experience memory**: For this question, we refer you to global reply 1. We use RL to assist the LLM in exploiting the interaction experience, and the experience memory serves as a *pivot* between the RL algorithm and the LLM. In practice, the experience memory is regarded as a tabular $Q$ function and is updated by Q-Learning. **About limitations**: Thanks for your kind reminder. We will add a Limitations section in a later revision. * [1] Watkins, Christopher John Cornish Hellaby. Learning from delayed rewards. PhD thesis, University of Cambridge, England, 1989. * [2] Peng, Jing and Williams, Ronald J. Incremental multi-step Q-learning. Machine Learning 1996. --- Rebuttal Comment 1.1: Title: Looking forward to your reply Comment: Hello. The author-reviewer discussion period is going to end. We wonder if our rebuttal solves your concerns.
If any questions remain, we are willing to have a further and deeper discussion with you.
Summary: The authors introduced Reinforcement Learning with Experience Memory (RLEM) to update the memory of the LLM agent, enabling it to evolve its capability without fine-tuning the parameters of the LLM. Extensive experiments were conducted on two RL task sets to evaluate the proposed framework. The experimental results demonstrate the superiority and robustness of Rememberer. Strengths: 1. The interesting perspective of exploration is a meaningful attempt to combine LLM with RL. 2. The paper is well-written, easy to understand, and the illustrations are exquisite. 3. The ample experiments demonstrate the effectiveness of the proposed method. Weaknesses: The discussion and introduction about whether to use RL are not sufficient. It is unclear why RL must be used instead of other methods. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Why use RL instead of other methods? 2. If there are alternative methods to RL, should they be supplemented with comparisons in the experimental section? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Refer to weaknesses and questions Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your kind concern. First, we want to further clarify the necessity of updating the $Q$ values in the experience memory. Compared to traditional ICL (in-context learning) methods with labeled dynamic exemplars, there will be both good and bad experiences in the memory of Rememberer. Hence, *it is necessary to discriminate between good and bad experiences through the learned $Q$ values*. Given this premise, reinforcement learning is the most straightforward method to update the recorded $Q$ values in POMDP (partially observable Markov decision process) problems, and we didn't work out other alternatives. If your concern is about other LLM-based agents utilizing memory or experience, we are glad to offer a quick comparison (please note that most of these methods were released after the submission deadline of NeurIPS; thus, we were unable to include the comparison in our paper). Voyager [1] proposes to use code as actions, and designs a skill library to store successful programs as skills. GITM [2] stores past successful action sequences in a text memory to assist an LLM planner in future planning without further discriminating their values; the stored experiences are then summarized by the LLM to gain deeper insights into the planning policy. ChatDB [3] leverages a relational database to track states in a dynamic process. In contrast, we store the experiences in a structured memory and learn their $Q$ values to identify the more valuable actions. In this way, we can combine RL with LLM and design a semi-parametric RL agent. We are glad to include these comparisons in Related Work in a later revision. As it will take considerable effort to migrate these methods to the test benches in this paper, we may not introduce further new results during the rebuttal. Instead, we can draw a number of valuable inspirations from them and plan to include several new insights in our future work.
* [1] Guanzhi Wang, Yuqi Xie, Yunfan Jiang, Ajay Mandlekar, Chaowei Xiao, Yuke Zhu, Linxi Fan, Anima Anandkumar. Voyager: An Open-Ended Embodied Agent with Large Language Models. arXiv:2305.16291. * [2] Xizhou Zhu, Yuntao Chen, Hao Tian, Chenxin Tao, Weijie Su, Chenyu Yang, Gao Huang, Bin Li, Lewei Lu, Xiaogang Wang, Yu Qiao, Zhaoxiang Zhang, Jifeng Dai. Ghost in the Minecraft: Generally Capable Agents for Open-World Environments via Large Language Models with Text-based Knowledge and Memory. arXiv:2305.17144. * [3] Chenxu Hu, Jie Fu, Chenzhuang Du, Simian Luo, Junbo Zhao, Hang Zhao. ChatDB: Augmenting LLMs with Databases as Their Symbolic Memory. arXiv:2306.03901. --- Rebuttal Comment 1.1: Title: Keep Score Comment: Thanks for your reply, but the current reply still doesn't address my concerns, so I'm currently choosing to keep the score. --- Reply to Comment 1.1.1: Comment: Hello. Thanks for your reply. It is a pity that we didn't address your concerns. Perhaps you could elaborate on your questions, so that we can give a more specific reply.
Summary: This paper proposes a framework to combine RL w/ LLM using an offline Q-learning setting. An experience memory component is proposed to store past experience for estimating Q values. Evaluation on WikiHow and WebShop demonstrates the effectiveness of the proposed method and framework. Strengths: The paper proposes an interesting idea of combining LLM w/ RL. This is a relatively new field. The evaluation results on 2 tasks are strong. Weaknesses: This work seems very empirical. One question I have is how to generalize this method to other tasks. It will be good to see some theoretical analysis. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. It might be better to not use the OpenAI logo to represent LLM in the figures. 2. In Equation (1), how is max calculated? 3. It is not clear how the reward is defined and can be obtained. 4. There are a lot of RL issues people have overcome over the years, such as over-estimation of Q values, sparse reward, and using a replay buffer to provide iid data. It seems the proposed method will have all these issues. How did this paper tackle them? How do we guarantee that this method will converge? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: It seems the proposed method of storing memory is not very scalable. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your valuable review and advice. **Generalizability**: For this question, we refer you to global reply 2. **$\max$ in Eqn. 1**: This $\max$ is calculated over the actions already recorded for $(g, o_{t+1})$, as we cannot traverse all possible actions when they are free-form language. We will clarify this in a later revision. Actually, in the two tested task sets, the $Q$ value of unrecorded actions can be deemed to have a default value of 0. Such a "default" value may have better alternatives in different task domains. **How reward is defined and obtained**: The reward is defined by the underlying task sets and obtained from the environments during interaction. In WebShop, a score is computed at the end as the returned reward, according to the relevance between the reference product and the product bought by the agent. In WikiHow, the agent is instructed to navigate several pages, and a reward is given when the agent manages to reach one of the target pages. The details of the reward definitions can be found in the original papers of the task sets. **Classic RL issues**: This paper mainly focuses on how to improve and evolve the LLM's capability with an RL algorithm, not on the issues of RL itself. Therefore, we only implement a basic Q-Learning method, setting aside a few issues from more advanced RL domains. The methods aimed at solving these RL issues (*e.g.*, Double Q-Learning [1], replay buffers [2], and diverse exploration methods) are orthogonal to this work, and can be straightforwardly embedded into Rememberer. Besides, we add a few statements w.r.t. the specific concerns. * Over-estimation. As shown in Tab. 7 in the paper, over-estimation doesn't constitute a serious problem for the two tested task sets. Double Q-Learning is usually adopted for lookup-based $Q$ estimators to ameliorate over-estimation by iteratively updating two $Q$ estimators.
However, such an iterative updating method may require more update steps than the current Rememberer to ensure a sufficiently accurate estimation. If time permits, we will try to supplement several results obtained with Double Q-Learning. * Sparse reward. On the two tested task sets, the strong base capability of the LLM guarantees trajectories that won't deviate far from the optimal policy. Thus, sparsity of reward does not constitute a serious issue. * Convergence. We cannot guarantee mathematical convergence of Q-Learning on the experience memory, as many seriously bad observations and actions are rarely accessed owing to the strong base capability of the LLM, *i.e.*, the observation & action spaces are traversed only insufficiently. However, the experience memory is trained on top of the strong capability of the LLM, so the training process shouldn't deviate far from the optimum. In other words, the seriously bad states have already been implicitly traversed in the pretraining stage of the LLM, and thus convergence shouldn't be a crucial issue. On the other hand, the experience memory serves as an augmentation to a capable foundation model; hence, mathematical convergence is not as critical as for an RL model trained from scratch. The current version can already help the LLM well. **Scalability of the method to store experiences**: The powerful basic capability of the LLM makes a large-scale experience memory unnecessary. Meanwhile, the limited number of exemplars in the prompt will lead to performance saturation as the size of the experience memory keeps increasing. We refer you to the experiments in Sec. 4 in the supplementary for deeper insight into this argument. Besides, if there is truly a need for a large-scale experience memory, more scalable methods to store and retrieve the experiences (*e.g.*, a vector database implemented with FAISS) can surely be adopted. The concrete implementation of the experience memory is not coupled with the Rememberer framework.
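As an aside, the tabular Double Q-Learning update discussed earlier in this reply can be sketched as follows (a hedged, dict-based illustration with assumed names, not the authors' implementation):

```python
import random

def double_q_update(qa, qb, state, action, reward, next_state, actions,
                    alpha=1.0, gamma=0.5, rng=random):
    """Tabular Double Q-Learning step (van Hasselt, 2010), sketched.

    One of the two lookup tables is chosen at random for the update; the
    greedy action is selected by the updated table but *evaluated* by the
    other one, which counteracts over-estimation."""
    if rng.random() < 0.5:
        qa, qb = qb, qa  # swap roles for half of the updates
    # greedy next action under the table being updated...
    best = max(actions, key=lambda a: qa.get((next_state, a), 0.0))
    # ...but evaluated by the *other* table
    target = reward + gamma * qb.get((next_state, best), 0.0)
    old = qa.get((state, action), 0.0)
    qa[(state, action)] = old + alpha * (target - old)
```

Because the evaluating table lags behind the updated one, early-stage targets tend to be biased low, which is consistent with the under-estimation after few-step training reported in the supplementary results.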
**About the figure**: Thanks for your kind reminder. We will update our figure in a later revision. * [1] Hado van Hasselt. Double Q-learning. NeurIPS 2010. * [2] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin A. Riedmiller, Andreas Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, Demis Hassabis. Human-level control through deep reinforcement learning. Nature 2015. --- Rebuttal Comment 1.1: Title: Supplementary result about over-estimation Comment: Hello, we implemented Double Q-Learning and would like to supplement several results here.

**Estimation Error on WebShop**

| Setting | Epoch | Estimation | Real Training Reward | Abs Error | Relative Error |
|:-----------|:-----:|:----------:|:--------------------:|:---------:|:--------------:|
| Full Model | 3 | 0.86 | 0.84 | 0.02 | 2.38 |
| +DoubleQL | 3 | 0.71 | 0.75 | 0.04 | 5.33 |
| +DoubleQL | 6 | 0.69 | 0.77 | 0.08 | 10.39 |

**Estimation Error on WikiHow**

| Setting | Epoch | Estimation | Real Training Reward | Abs Error | Relative Error |
|:-----------|:-----:|:----------:|:--------------------:|:---------:|:--------------:|
| Full Model | 3 | 2.48 | 2.60 | 0.12 | 4.62 |
| +DoubleQL | 3 | 2.47 | 2.90 | 0.43 | 14.83 |
| +DoubleQL | 6 | 2.70 | 2.90 | 0.20 | 6.90 |

As shown by the results, the iterative update of Double Q-Learning fails to ameliorate the estimation error in *few-step* training. Over-estimation is indeed suppressed; however, serious under-estimation may be introduced [1]. * [1] Hado van Hasselt. Double Q-learning. NeurIPS 2010. --- Reply to Comment 1.1.1: Title: Looking forward to your reply Comment: Hello. The author-reviewer discussion period is going to end. We wonder if our rebuttal solves your concerns. If any questions remain, we are willing to have a further and deeper discussion with you.
Summary: This paper introduces an interesting approach that harnesses the capabilities of large language models (LMs) to tackle reinforcement learning (RL) problems. The method involves estimating Q-functions using RL algorithms and providing advice to the LMs about actions with high and low Q-values. The expectation is that LMs will utilize their "remembering" abilities to effectively solve the problem. Empirical evaluation on two language-based reinforcement learning tasks demonstrates the enhanced performance achieved by the proposed method. Strengths: 1. The idea of combining a fixed large language model and a reinforcement learning component estimating action-value functions by presenting experience and advice in prompts is, as far as the reviewer is concerned, interesting. 2. The paper exhibits a well-structured organization, featuring clear and coherent writing that is easily comprehensible. Additionally, the experiments provide compelling evidence of the advantages offered by the proposed method. Weaknesses: 1. An essential assumption underlying the proposed method is that large language models (LMs) possess the capacity to "reason" by effectively recalling all instances of success and failure. Hence, it is crucial to engage in a discussion regarding the limitations of this assumption to establish a comprehensive understanding. Questions arise about the case where the observation space is extensive, and whether LMs implicitly learn certain types of Q-functions. While the proposed method is simple yet seemingly effective, delving deeper into the understanding of simple methods often yields intriguing insights. 2. The ablation studies are not sufficient; more of them would provide valuable insights and address specific questions. 3. The applicability of the proposed method to tasks with observations in natural language is evident. However, it remains unclear how the LMs can be adapted to tackle problems involving image or vector-based observations.
Further elucidation on this aspect is necessary to ensure a comprehensive understanding of the proposed method's adaptability and versatility across different observation types. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Please respond to the questions raised in the weakness section. 2. What is the architecture of the Q function and what is the reinforcement learning algorithm for learning the Q function? 3. An ablation study where the LLM does not have access to the discouraged actions is expected. 4. Another critical issue, unrelated to the academic content of the paper, requires attention. It has been brought to attention that the usage of OpenAI's brand in figures, if used without proper permission, is highly inappropriate and may constitute a violation. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors didn't discuss the limitations of the proposed method. The conclusion section summarizes the contribution of the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your valuable review and advice. **About extensive observation spaces**: An extensive observation space may result in a much larger experience memory, which may require more scalable and more efficient approaches to store the experiences. Meanwhile, we refer you to the experiments in Sec. 4 in the supplementary. As shown in the results, the performance improves as the number of experiences increases, and there is a saturation, which may be due to the limited number of exemplars. In a more diverse observation space, the saturation may come later. Besides, experience merging, observation shrinking (as in Synapse [1]), and a forgetting mechanism (as in MemoryBank [2]) should help to control the memory size and exploit the experiences more effectively. These perspectives will be studied in our future work. **About whether the LLM implicitly learns a certain $Q$ function**: We checked the $Q$ values predicted by the LLM during the test phase on WebShop. The average absolute error is 0.417 (reward in WebShop is between 0 and 1), which indicates that *the LLM doesn't really learn the $Q$ function itself*. Nevertheless, as stated in global reply 1, the LLM is not expected to predict an accurate $Q$ value. Instead, we use the experience memory to implement a lookup-based $Q$ function and use RL to learn it. As for the LLM, *it is just expected to speculate which action is better (to be encouraged) and which one is worse (to be discouraged), as well as which action is the best among multiple encouraged actions*. Maybe our introduction to the "output format" in Sec. 3.2 confused you. We will refine our expression in a later revision. **Ablation study where the LLM has no access to the discouraged actions**: We supplemented experiments on WikiHow and the results are depicted below.
| Setting | Avg Reward | Success Rate |
|:----------------|:---------:|:-------------:|
| Full Model | 2.63 | 0.93 |
| w/o Discouraged | 2.48 | 0.81 |

After removing the discouraged actions from the prompt, the performance degrades seriously on WikiHow. We inspected the learned experience memories and the exemplars chosen by the agents. It turns out that sometimes there are no proper actions to encourage in the retrieved experience. *In such cases, the discouraged actions help the LLM to avoid several wrong attempts. If the discouraged actions are not accessible, the LLM receives no valuable guidance from the experience.* This leads to poorer performance. Another interesting discovery is that in the aforementioned circumstances the ablation model is more willing to imitate or repeat the actions given in the action history rather than explore randomly. This mechanism is not yet clear to us. We will add these new phenomena and perspectives in a later revision. **Adaptability and versatility across different observation types**: We refer you to global reply 2 for this question. **Architecture and learning algorithm for $Q$ function**: The $Q$ function is implemented with the experience memory as a lookup-based function. We implement a basic Q-Learning algorithm to learn the $Q$ function. **About the figure and limitations**: Thanks for your kind reminder. We will update our figure and add a Limitations section in a later revision. * [1] Longtao Zheng, Rundong Wang, Bo An. Synapse: Leveraging Few-Shot Exemplars for Human-Level Computer Control. arXiv:2306.07863. * [2] Wanjun Zhong, Lianghong Guo, Qiqi Gao, He Ye, Yanlin Wang. MemoryBank: Enhancing Large Language Models with Long-Term Memory. arXiv:2305.10250. --- Rebuttal Comment 1.1: Title: Thanks for your response Comment: The author's response has solved most of my concerns. Thanks a lot. It is interesting to see that the LLM does not learn an accurate Q function but a reasonable preference order.
It makes the reviewer curious about what will happen in continuous action spaces, as future work. The reviewer is still concerned that the proposed method cannot be easily extended to problems with extensive observation spaces. Ratings have been increased accordingly. Remember to update your figures! --- Reply to Comment 1.1.1: Comment: Thanks for your kind advice! It is also intriguing for us to investigate extensive observation spaces and continuous action spaces further in our future work. And we will update our figure and article according to your advice.
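The distinction surfaced in this exchange — the LLM's predicted $Q$ values can be far off in absolute terms while still inducing a reasonable preference order over actions — can be illustrated with a toy check (the numbers below are hypothetical, not the paper's measurements):

```python
def mean_abs_error(pred, true):
    """Average absolute deviation between predicted and reference Q values."""
    return sum(abs(p - t) for p, t in zip(pred, true)) / len(pred)

def same_ranking(pred, true):
    """True iff both value lists order the actions identically."""
    rank = lambda xs: sorted(range(len(xs)), key=xs.__getitem__)
    return rank(pred) == rank(true)

# toy illustration: a biased estimate that still prefers the right actions
true_q = [0.1, 0.5, 0.9]   # Q values learned in the experience memory
llm_q = [0.5, 0.8, 1.0]    # hypothetical LLM speculations, shifted upward
```

Here `mean_abs_error(llm_q, true_q)` is about 0.27, yet `same_ranking(llm_q, true_q)` holds — exactly the regime where the LLM remains useful for discriminating better actions from worse ones despite an inaccurate $Q$ estimate.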
Rebuttal 1: Rebuttal: Thanks to all the reviewers for the kind reviews. We collect all the opinions and reply to several common concerns here. 1. **Role of Q-Learning (and what the $Q$ function is)**: We'd like to further clarify our motivation and the role of Q-Learning in our proposed Rememberer approach. Our motivation is to make the LLM utilize its experience and evolve its capability. A simple experience memory cannot label the good or bad experiences; thus, we introduce RL to learn $Q$ values for the experiences, so that the good and bad experiences can be discriminated and the memory-augmented system can evolve its ability. In this system, Q-Learning is not directly performed on the LLM. Instead, it is the *experience memory* to which RL is applied, so as to avoid directly updating the LLM parameters. *I.e.*, the experience memory constitutes a lookup-based $Q$ function. Actually, we don't expect the LLM to accurately predict the $Q$ value. *The LLM is just expected to be capable of discriminating between better and worse actions* with the help of exemplars from the experience memory. 2. **Generalizability**: The introduced RLEM framework and the Rememberer approach are not intrinsically constrained to the text domain. The working domain mainly depends on the underlying LLM, while Rememberer can work with any available LLM per se. *By substituting a multimodal LLM such as GPT-4 or Flamingo, Rememberer can also deal with multimodal observations. Besides, even a pure text LLM still has an opportunity to handle other modalities.* *E.g.*, some work like Socratic Models [1] leverages a captioning model to enable a text LLM to handle visual inputs. In Voyager [2], the complex observation from Minecraft is summarized into text. In this paper, WebShop and WikiHow are in fact also multimodal tasks; their GUIs are represented in a text format (see Sec. 1 in the supplementary). For demonstration, we are conducting a simple experiment with an Atari environment.
The results may not come up in time; however, we will at least describe our plan to represent an Atari observation in text format in later replies. * [1] Andy Zeng, Maria Attarian, Brian Ichter, Krzysztof Choromanski, Adrian Wong, Stefan Welker, Federico Tombari, Aveek Purohit, Michael S. Ryoo, Vikas Sindhwani, Johnny Lee, Vincent Vanhoucke, Pete Florence. Socratic Models: Composing Zero-Shot Multimodal Reasoning with Language. ICLR 2023. * [2] Guanzhi Wang, Yuqi Xie, Yunfan Jiang, Ajay Mandlekar, Chaowei Xiao, Yuke Zhu, Linxi Fan, Anima Anandkumar. Voyager: An Open-Ended Embodied Agent with Large Language Models. arXiv:2305.16291. Pdf: /pdf/57c52a58799849ce39cf07d1026b02e93f656bc7.pdf
NeurIPS_2023_submissions_huggingface
2,023
Summary: The paper proposes REMEMBERER, a novel framework for Large Language Models (LLMs) that employs a persistent experience memory and a Reinforcement Learning with Experience Memory (RLEM) mechanism. This setup aims to enable LLMs to learn from previous interaction experiences in decision-making tasks, improving their policies. The experience memory acts as an external repository, storing experiences from past episodes, which the LLM can exploit without the need for fine-tuning its parameters. The framework, tested on two recent Reinforcement Learning (RL) task sets, demonstrated an improvement in performance over previous state-of-the-art (SOTA) models. Strengths: The paper's primary strength lies in its innovative approach to RL using LLMs. The authors introduce an agent framework, REMEMBERER, that overcomes the typical limitations of existing approaches, such as the significant cost of fine-tuning LLMs. The proposal of RLEM, which updates the experience memory through RL training, enables the system to evolve its abilities in an efficient manner. Moreover, the empirical results validate the effectiveness of REMEMBERER, with it outperforming the SOTA models on two RL tasks. This paper is in line with a large number of recent papers that show that LLMs can be used to create agents. Weaknesses: While the results are encouraging, the paper doesn't delve deep enough into the impact of different configurations of the REMEMBERER system. It would be beneficial to explore how varying the size and management of the experience memory as well as the similarity functions affects system performance. In addition, the paper focuses on improvement against SOTA models but doesn't provide a comparative analysis against other similar approaches that utilize memory or experience in RL, such as the very recent Voyager paper (which is too recent to compare against, but an example of these types of papers). 
A weakness of this approach is that it requires using a reward function, when reward functions might be unavailable or hard to create in difficult computer tasks. In comparison to other RL algorithms, the approach seems constrained to text domains. Technical Quality: 3 good Clarity: 3 good Questions for Authors: What is x in the subscript O_x in the methods section? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: In the title, I think it should be “Large Language Models are Semi-Parametric Reinforcement Learning Agents” “Such an agent turns to be a semi-parametric system that can evolve through RL process” -> “Such an agent is a semi-parametric system that can evolve through an RL process” Overall, I think the paper can be cleaned up a lot with better English and notation. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your valuable review and advice. **Impact of memory size**: In practice, we didn't limit the capacity of the experience memory, so it can accommodate as many experiences as the hardware memory allows. For a perspective on the impact of the actual memory size, we refer you to the experiments in Sec. 4 of the supplementary. Fig. 3 in the supplementary and Fig. 1 in the rebuttal PDF show the number of new experiences and the total number of experiences in the memory after each training epoch, respectively. The results in Fig. 2 of the supplementary show that performance improves as the memory size increases and then saturates. This saturation may be attributed to the limited number of active exemplars in the input prompt. **Impact of similarity function**: We didn't include an analysis of the similarity function, as we do not regard this component as the core of the proposed Rememberer framework. We mainly focus on building a general framework that assists an LLM in exploiting its experience; the core is the experience memory and the RL-based updating approach. The similarity function, in contrast, depends on the specific task domain. However, we agree that the choice of similarity function may have a notable impact on the final performance. Following your advice, we have added an ablation study on WikiHow. The results are shown below. |Setting|Avg Reward|Success Rate| |:--|:-:|:-:| |Full Model|2.63|0.93| |w/o Task Sim. $f_g$|2.63|0.94| |w/o Obsrv. Sim. $f_o$|2.47|0.87| We notice no significant difference between the performance of the full model and that of the ablation without task similarity, while performance degrades when *observation similarity* is removed. This may indicate that on these tasks, the GPT model benefits more from experiences with similar observations than from experiences with similar instruction patterns. 
(The task similarity for WikiHow is implemented according to the instruction pattern and may be too coarse to distinguish different experiences.) In the results, we find that without observation similarity, Rememberer fails to adjust the retrieved experiences to the specific observation. *E.g.*, similar instructions to access an article may be issued when the agent is on a search result page or on a category detail page, but the actions to be taken differ depending on the underlying page the agent is on. In such cases, inappropriate experiences cannot provide sufficiently valuable guidance, which leads to poorer performance. In conclusion, it is worth seeking more appropriate and effective similarity functions for each specific task domain. We will update our paper according to the new results and perspectives. **Comparison with other LLM-based agents utilizing memory or experience**: We have also noticed a group of related works released **after** the NeurIPS submission deadline (*e.g.*, Voyager [1] and GITM [2] on May 25, and ChatDB [3] on Jun 7). Voyager proposes to use code as actions and designs a skill library to store successful programs as skills. GITM stores past successful action sequences in a text memory to assist an LLM planner in future planning without further discriminating their values; the stored experiences are then summarized by the LLM to gain deeper insights into the planning policy. ChatDB leverages a relational database to track states in a dynamic process. In contrast, we store the experiences in a structured memory and learn their $Q$ values to filter out the more valuable actions. In this way, we can combine RL with an LLM and design a semi-parametric RL agent. Meanwhile, some work dedicated to long-term or knowledge-augmented conversational tasks also combines LLMs with memory (*e.g.*, MemoryBank [4] and RET-LLM [5]). Here, the external memory is usually used to extend the context length the LLM can perceive. 
We are glad to include these comparisons in Related Work in a future revision. As methods like Voyager and GITM are designed for the specific open-world environment MineDojo, it would take some effort to migrate them to our test benches, and it would be difficult for us to introduce new experimental results during the rebuttal period. Nonetheless, we find a number of valuable inspirations in these works, *e.g.*, the automatic curriculum and code-as-action from Voyager, and the automatic goal decomposition and experience summarization from GITM. We are working to benefit from these ideas in our future work. **Generalizability**: We refer you to global reply 2 for this question. **Symbol $O_x$**: $x$ denotes the index set of the experiences retrieved from the memory. We use the symbol $x$ because the index set depends on the particular state encountered during interaction; it cannot be determined in advance and is thus deemed "unknown". We will update the notation and supplement the explanations in a later revision. **About advice in "Limitations"**: Thanks for your kind opinion. We will update the title and expressions in a later revision according to your advice. * [1] Guanzhi Wang, Yuqi Xie, Yunfan Jiang, Ajay Mandlekar, Chaowei Xiao, Yuke Zhu, Linxi Fan, Anima Anandkumar. Voyager: An Open-Ended Embodied Agent with Large Language Models. arXiv:2305.16291. * [2] Xizhou Zhu, Yuntao Chen, Hao Tian, Chenxin Tao, Weijie Su, Chenyu Yang, Gao Huang, Bin Li, Lewei Lu, Xiaogang Wang, Yu Qiao, Zhaoxiang Zhang, Jifeng Dai. Ghost in the Minecraft: Generally Capable Agents for Open-World Environments via Large Language Models with Text-based Knowledge and Memory. arXiv:2305.17144. * [3] Chenxu Hu, Jie Fu, Chenzhuang Du, Simian Luo, Junbo Zhao, Hang Zhao. ChatDB: Augmenting LLMs with Databases as Their Symbolic Memory. arXiv:2306.03901. * [4] Wanjun Zhong, Lianghong Guo, Qiqi Gao, He Ye, Yanlin Wang. MemoryBank: Enhancing Large Language Models with Long-Term Memory. arXiv:2305.10250. 
* [5] Ali Modarressi, Ayyoob Imani, Mohsen Fayyaz, Hinrich Schütze. RET-LLM: Towards a General Read-Write Memory for Large Language Models. arXiv:2305.14322. --- Rebuttal Comment 1.1: Title: Reviewer Response Comment: Thank you for your clarifications, revisions, and ablations. After reading your rebuttal and the other reviews I will keep my score as-is for now.
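The retrieval-and-filtering mechanism described in the rebuttal (experiences scored by task and observation similarity, with learned $Q$ values selecting the most valuable action) can be illustrated with a minimal sketch. This is NOT the paper's implementation: the names `Experience`, `f_g`, `f_o`, and `retrieve`, the word-overlap similarity, and the equal weights are all hypothetical stand-ins.

```python
# Hypothetical sketch of similarity-based experience retrieval with
# Q-value filtering; all names and the toy similarity are illustrative.
from dataclasses import dataclass, field

@dataclass
class Experience:
    task: str
    observation: str
    q_values: dict = field(default_factory=dict)  # action -> learned Q value

def f_g(task_a, task_b):
    """Toy task similarity: Jaccard overlap of words."""
    a, b = set(task_a.split()), set(task_b.split())
    return len(a & b) / max(len(a | b), 1)

def f_o(obs_a, obs_b):
    """Toy observation similarity: Jaccard overlap of words."""
    a, b = set(obs_a.split()), set(obs_b.split())
    return len(a & b) / max(len(a | b), 1)

def retrieve(memory, task, observation, k=2, w_g=0.5, w_o=0.5):
    """Return the k most similar experiences, each with its highest-Q action."""
    scored = sorted(
        memory,
        key=lambda e: w_g * f_g(e.task, task) + w_o * f_o(e.observation, observation),
        reverse=True,
    )
    return [(e, max(e.q_values, key=e.q_values.get)) for e in scored[:k]]
```

Removing `f_o` from the score (the "w/o Obsrv. Sim." ablation) would make the retrieval blind to whether the agent is on, say, a search result page or a category page, which matches the degradation the rebuttal reports.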
null
null
null
null
null
null
Continuous Parametric Optical Flow
Accept (poster)
Summary: The submission 6736, entitled "Continuous Parametric Optical Flow," presents a novel multi-frame optical flow strategy that expresses the flow continuously. This is in contrast to conventional flow strategies, which encode this displacement in a discretized manner. This result is made possible by regressing the B-spline parameters of the pixel displacement. The approach shares similarities with RAFT and PIPs, in that this motion is refined iteratively via a convolutional-GRU, and the network can accommodate a series of successive images as input (more than 2). An extensive series of experiments underline the approach's effectiveness against existing state-of-the-art in optical flow and keypoint tracking. Additionally, novel datasets specifically dedicated to the task at hand have been created. Strengths: * The idea of estimating continuous optical flow from a neural network seems relatively novel. * The performance reported in the manuscript is very competitive. Weaknesses: * While the overall editorial quality of the paper is acceptable, it still contains a large number of typos, and certain parts of the paper would benefit from additional proofreading. The flow of the paper sometimes lacks smoothness and transitions. * The literature review is rather short and fails to position the paper with respect to other approaches. It gives the impression that the paper is poorly motivated. It would be helpful to underline more clearly the differences with other techniques and why this approach is substantially better. * Similar to the previous comment, it would be interesting to list what downstream applications could benefit from this technique (3D? Tracking? Stabilization?). * One of my main gripes regarding this manuscript is the lack of scalability. Due to its ability to input multiple frames, the network requires a significant amount of memory. 
This limits its applicability and raises the question of fairness when compared with less memory-intensive techniques. * Another important point to address is the lack of clear contribution. In the reviewer's opinion, the main novelty of this paper is the type of output as B-splines, while the rest of the work is strongly inspired by other techniques. * What are the scalability and clear limitations of the technique? How does it perform under very small or very large motions? It would be interesting to investigate such specific use cases. * In Table 3, "Neural-ODE" is written, but in the method section, it is written as "Basic" while all other techniques are using ODE. Technical Quality: 3 good Clarity: 3 good Questions for Authors: I have expressed most of my questions in the previous section of this review. Despite all the shortcomings mentioned earlier, I found this work new and interesting, and therefore, I would like to issue a relatively positive opinion regarding its acceptance. However, the reviewer would like to specify that he has not been working on such a topic lately and has low confidence. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: See previous sections. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: *1.Typos and Fluency:* - **Response:** Thanks for the valuable suggestion. We will fix all the typos and proofread the manuscript as suggested. *** *2.Motivation & Related Work:* - **Response:** - Thanks for the suggestion. First, as agreed by all the reviewers, our novel concept of *continuous optical flow* describes **dense and continuous pixel motion displacement**, and can be regarded as an extension of classic optical flow, which depends only on adjacent frames. Inspired by RAFT's iterative optimization, we use similar techniques to construct the cost volume and set a proper decoder for our task, but introduce a continuous encoder and fuse an explicit parametric curve. Notably, step-by-step methods based on chained flow inevitably accumulate drift and fail under occlusion. **In summary, our flow can generate all continuous correspondences at once, with better accuracy after occlusion.** - Secondly, as for the closest vision task, generic point tracking also achieves long-term fine-grained matching for frame-to-frame mapping. With large-scale synthetic training and multi-frame aggregation, methods like PIPs achieve better generalization ability and robustness. However, current techniques have two drawbacks: the correspondences are inherently discrete rather than continuous displacements, and sparse tracking is hard to infer in parallel for all pixels at once. **In summary, our method can generate dense and continuous point tracking more efficiently.** - Thirdly, as for continuous modeling, we investigated other motion estimation methods. In the video interpolation field, there are some classic motion assumptions such as linear, quadratic, and cubic assumptions. To balance flexibility and accuracy, we choose a B-spline with multiple adjustable control points as our regression objective, which is rare in previous papers. 
**Utilizing a parametric but more flexible representation is also more suitable for continuous motion modeling.** - To conclude, continuous parametric optical flow is proposed to describe *spatially-dense and temporally-continuous pixel motion* with continuously encoded features and a flexible parametric curve. **Current techniques are greatly limited for this task.** *** *3.Downstream Applications:* - **Response:** Thanks for the comment. Our method could provide spatial-temporal long-term correspondences for 3D vision tasks such as non-rigid structure-from-motion [1] and simultaneous localization and mapping [2]. Another widespread downstream application is video analysis of dynamic scenes, such as arbitrary frame-rate video interpolation [3] and human keypoint tracking [4]. To promote this field with more real motion data, [5] recently released a novel large-scale dataset with high-quality annotations. *** *4.Scalability:* - **Response:** Thanks for the comment. On the scalability of our method, we agree that multi-frame inputs require more memory than iterative-update methods based on classic optical flow. However, our method can support longer frame inputs with only a slight increase in memory, because we randomly select a fixed number of features to guide the model. Compared to methods with less memory, our method shows better accuracy and runtime. Table. Comparison of Runtime and Memory Cost with Different Numbers of Input Frames | Input Frames | Runtime/ms | Memory/MB | |:------------:|:----------:|:---------:| | 4 | 274.6 | 2442 | | 8 | 280.3 | 2456 | | 12 | 304.3 | 2482 | | 16 | 307.4 | 2503 | *** *5.Contribution:* - **Response:** Thanks for the comment. First, the novel concept of *continuous optical flow* is proposed to describe dense and continuous pixel motion with an explicit parametric B-spline curve. 
Secondly, the combination of neural optimization and neural ODEs (ODE-ConvGRU) for optical flow computation is also novel, as agreed by the reviewers, with a usage strategy and objective different from existing video interpolation work. Thirdly, the field of continuous flow is almost blank; hence, algorithm performance needs a special evaluation system for non-sampled fitting and generalization. According to our investigation, training on an independent simulation dataset is also a valuable contribution. *** *6.Limitations:* - **Response:** Thanks for the constructive comments. In terms of the scalability of our method, as mentioned in the above response, we need multiple frames as input, with a slight increase in memory, but obtain better performance under occlusion and faster inference. We think the main limitation of our work lies in the parametric model: a strong motion prior can potentially hinder learning, out-of-distribution generalization, and long-term prediction because of the complexity of real motion. As requested by the reviewer, we provide some special cases with small or large motions in Figure 1 of the attached PDF. *** *7.Ablation:* - **Response:** Thanks for the comment. In the ablation studies, the module "Neural-ODE" means we validate the function of this module. Hence, "Basic + 6spline" denotes the setting in which we use basic CNNs without the Neural-ODE module for evaluation, while the full version with the Neural-ODE module is termed "All". For the other modules, the Neural-ODE encoder is used with the same setting to guarantee a fair comparison. *** [1] Sidhu V, Tretschk E, Golyanik V, et al. Neural dense non-rigid structure from motion with latent space constraints. [2] Fu Q, Yu H, Wang X, et al. Fast ORB-SLAM without keypoint descriptors. [3] Park S, Kim K, Lee J, et al. Vid-ode: Continuous-time video generation with neural ordinary differential equation. [4] Kreiss S, Bertoni L, Alahi A. 
Openpifpaf: Composite fields for semantic keypoint detection and spatio-temporal association. [5] Zheng Y, Harley A W, Shen B, et al. PointOdyssey: A Large-Scale Synthetic Dataset for Long-Term Point Tracking. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for all their clarifications and the additional experiments they have conducted to address my concerns, especially regarding limitations and scalability. I have also looked into the other reviewers' comments, and they comfort me in my first positive opinion regarding the acceptance of this paper for this conference. --- Reply to Comment 1.1.1: Comment: **Response** We would like to thank the reviewer for the positive and insightful comments. Regarding the limitations and scalability, we are pleased to hear that the response is helpful in addressing the reviewer's concerns, and we look forward to receiving the acceptance notification for this conference.
Summary: This paper proposes a continuous parametric optical flow estimation algorithm with a B-spline temporal trajectory representation and ODE-ConvGRU based feature extraction. Experiments have been done on both synthetic and real-world datasets, and the proposed continuous parametric method performs better than traditional flow-based and point-tracking based methods. Strengths: The idea of estimating continuous parametric flow is interesting and the achieved experimental result is promising. The presentation is clear and easy to follow. The ablation study clearly shows the benefits of the B-spline representation and ODE-ConvGRU feature extraction. Weaknesses: 1. It is somewhat unclear how to specify the flexibility of the trajectories: as in Table 3, different numbers of control points may affect the performance significantly, and in real scenarios, different objects in a scene may have different degrees of motion complexity, so it is not clear whether a single choice of B-spline number can handle this. 2. In Table 2, the 'TRMSE' lines for 'PIPS' and 'ODE-6spline' under the Vid-DAVIS dataset are exactly the same. This is problematic since their 'ADE_All' metrics are different; why? Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. What is the computational complexity / running speed of the proposed algorithm? What is the memory cost? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: I do not see any potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Weakness** *** *1.Flexibility of Parametric Curve:* - **Comment:** It is somewhat unclear how to specify the flexibility of the trajectories, as in Table 3, different numbers of control points may affect the performance significantly, and in real scenarios, different objects in a scene may have different degrees of motion complexity; it is not clear whether a single choice of B-spline number can handle this? - **Response:** Thanks for the valuable comment. We agree that real-world pixel motion can be rather complex. Consequently, for polynomial curves controlled only by the parameter *degree*, the flexibility is limited. In this paper, we choose the B-spline as our fitting model because the shape of this curve additionally depends on adjustable control points regressed by our end-to-end framework. By adjusting certain control points, the local segment of the curve can be flexibly changed. Thus, for splines, the degree only decides the global complexity, and the recovery of motion detail is achieved by local optimization over multiple control points. Due to this piecewise optimization, **the B-spline enables higher flexibility in handling complex motions**. Even though flexible spline curves like the B-spline or Bezier curve achieve an improved ability to handle complex motions, we still have to admit a limitation of *all parametric models*: a strong explicit constraint potentially leads to better performance only on motion scenarios more similar to the training set. *** *2.Data Error:* - **Comment:** In Table 2, the 'TRMSE' lines for 'PIPS' and 'ODE-6spline' under the Vid-DAVIS dataset are exactly the same; this is problematic since their 'ADE\_All' metrics are different, why? - **Response:** Thanks for pointing out this issue. We are sorry that we accidentally copied erroneous TRMSE results from the raw data, displaying exactly the same performance for PIPs and ODE-6spline. Here we show the corrected table below. 
As the table reports, the TRMSE metric of ODE-6spline in fact outperforms PIPs and RAFT over all inference ranges of the Vid-DAVIS benchmark, and a similar trend appears on the other benchmarks. Table. Updated TRMSE metric of ODE-6spline on the Vid-DAVIS benchmark | Method | Metric | Benchmark | 20f | 24f | 28f | 32f | |:------------------:|:------:|:---------:|:---------:|:---------:|:---------:|:---------:| | ODE-6spline (Ours) | TRMSE | Vid-DAVIS | **12.96** | **15.76** | **18.12** | **20.27** | *** **Questions** *** *1.Computational Complexity & Memory Cost:* - **Comment:** What is the computational complexity / running speed of the proposed algorithm? What is the memory cost? - **Response1:** Thanks for the valuable comment. As requested, we provide the running speed and memory cost below. Since the samples in all benchmarks are 256x256, we use Vid-DAVIS to test the average runtime over the 20-frame inference range and the memory cost for our method and all baselines. - **Response2:** Our method is **more efficient, with a shorter inference time**, at generating spatially-dense and temporally-continuous pixel trajectories than the baselines, because an independent parametric curve for every pixel can directly provide all correspondences at once, instead of multi-step iteration along the temporal sequence (as in classic optical flow) or along spatial coordinates (as in long-term point trackers with sparse query points). - **Response3:** For faster inference, our proposed algorithm relies on multi-frame inputs and a multi-time cost volume, resulting in a controllable increase in memory compared to traditional two-frame flow and robust parallel tracking, instead of the significant memory expansion caused by a single-point tracker. Table. 
Comparison of Runtime and Memory Cost on Vid-DAVIS | Method | Runtime/ms | Memory/MB | |:------:|:----------:|:---------:| | Ours | 274.6 | 2442 | | RAFT | 338.2 | 1902 | | PIPs | 9073.7 | 18042 | --- Rebuttal Comment 1.1: Comment: Thanks to the authors for the detailed answers to my questions. All my concerns have been addressed and I don't have any further questions. --- Reply to Comment 1.1.1: Comment: We deeply appreciate the reviewer's valuable time and effort in providing insightful comments on our submission. We are happy to hear that our detailed rebuttal has resolved all the questions.
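The B-spline trajectory representation discussed in the rebuttals above (per-pixel control points regressed by the network, with the flow at any timestamp obtained by a single curve evaluation) can be sketched with standard De Boor evaluation. This is an illustration only: the control points are hand-picked, and the clamped knot layout and timestamp normalization are assumptions, not the paper's exact choices.

```python
# Hedged sketch: evaluating one pixel's clamped cubic B-spline trajectory
# via De Boor's algorithm. In the paper's setting the network would regress
# the control points per pixel; here they are hard-coded for illustration.

def deboor(x, degree, knots, ctrl):
    """Evaluate a 2-D B-spline at parameter x (De Boor's algorithm)."""
    n = len(ctrl)
    # Find the knot span k with knots[k] <= x < knots[k + 1]; clamp the
    # right endpoint so x = knots[n] evaluates the last curve segment.
    if x >= knots[n]:
        x, k = knots[n], n - 1
    else:
        k = degree
        while knots[k + 1] <= x:
            k += 1
    d = [list(ctrl[j + k - degree]) for j in range(degree + 1)]
    for r in range(1, degree + 1):
        for j in range(degree, r - 1, -1):
            lo, hi = knots[j + k - degree], knots[j + 1 + k - r]
            alpha = 0.0 if hi == lo else (x - lo) / (hi - lo)
            d[j] = [(1 - alpha) * a + alpha * b for a, b in zip(d[j - 1], d[j])]
    return d[degree]

degree = 3  # cubic
ctrl = [(0.0, 0.0), (1.0, 0.5), (2.0, 1.5), (3.0, 1.0), (4.0, 0.5), (5.0, 0.0)]
# Clamped knot vector: length = len(ctrl) + degree + 1 = 10, so the curve
# starts and ends exactly at the first and last control points.
knots = [0, 0, 0, 0, 1 / 3, 2 / 3, 1, 1, 1, 1]

# Each frame gets a normalized timestamp in [0, 1]; any intermediate
# (inter-frame) time is just one more evaluation of the same curve.
trajectory = [deboor(t / 7, degree, knots, ctrl) for t in range(8)]
```

This also makes the local-flexibility argument concrete: moving one control point changes only the nearby curve segments, while the degree stays fixed.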
Summary: This paper suggests a new model to compute temporally continuous optical flow pixel-wise by using B-splines. The inputs to the neural model are sequences of images; the outputs are N 2D control points of the spline model for each pixel. During training, the input is sampled from the dataset at varying time instants and with varying numbers of samples. The model architecture builds on known design patterns such as CNN feature encoding, iterative optimisation (search) using a recurrent GRU network, and a correlation pyramid to compute motion features. New elements are the ODE modelling of the continuous trajectories and the overall decoding (regression) of the control points for each pixel. Strengths: The paper introduces a model able to continuously capture motion in time. Applications like video editing could profit from such capabilities. The combination of neural optimisation and neural ODEs for optical flow computation seems novel to me. Weaknesses: Some insecurity in the argumentation, e.g. line 28: "real motion follows ... principles" - is there some motion of interest that does not follow physical principles? The related work should be structured by the motion model, e.g. non-parametric motion, parametric motion. GMA is an attempt to overcome the limitations of non-parametric motion, which is not clear from reading the text. The line of work called tracking (started with Lucas-Kanade) is non-parametric but sparse estimation of the apparent motion, whereas particle video (Sand, Teller) tried to find a model in between dense optical flow and KLT. line 160: xi is not shown in Figure 2. line 166: computing correlation between features at reference time and target time is limited, e.g. think of in-plane rotations. There are typos in the text, e.g. line 254. line 233, Eq. 10: I do not understand "sampled point trajectories". Why is N_s needed in the metrics? 
The spline is also continuous in the image, not only in time, but the ground truth available in the datasets is discretely sampled, so how do you match an estimate F(t_k) to F* in Eq. 10? Technical Quality: 3 good Clarity: 2 fair Questions for Authors: The model infers for all pixels in the reference image a 2NxWxH tensor with the control points of the pixels' B-splines. How do you infer the flow vectors of all the pixels in the successive images (2nd, 3rd, ...) in the sequence? Might trajectories of different pixels in the reference (1st) image collapse to a single pixel in some image in the sequence? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: A first assessment of the impact has not been given by the authors. Even fundamental work on motion analysis & tracking should project potential impacts on foreseen applications and uses or policies such as the UN SDGs. Focus on computer vision as a field and try to explain how your research might affect the field in the next years. Then try to explain how the interaction of computer vision with society might be affected by your research; e.g., seamless tracking combined with generative models might in future make it even harder to distinguish fake video from real. Give real-world examples of why the B-spline model is limiting, why you need parametric motion, and how you believe this dilemma could be overcome in future. The B-spline motion is smooth and introduces motion consistency, which is in some cases a disadvantage, e.g. when it comes to abrupt motion. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Weakness** *** *1.Motion Claim:* - **Response:** Thanks for the comment. We agree that all real-world motions follow classic physical principles. Thus, it is important to incorporate physical principles via explicit constraints rather than directly using a neural network to regress point trajectories; purely neural solutions may produce abrupt changes that do not follow real-world physical principles. *** *2.Related Work:* - **Response:** Thanks for the valuable suggestion. Firstly, in this paper, we mainly highlight continuous pixel motion and propose the novel concept of *continuous optical flow*, which combines a parametric curve with implicit continuous features to achieve flow estimation. Hence, parametric or non-parametric modelling is just one part of the complete framework. - In terms of the structure of the related work, we divide it into three subsets: relation to the proposed concept, vision applications using recent techniques, and parametric motion assumptions. Each part is connected to a different aspect. - Besides, we do not state that GMA (optical flow estimation) intends to overcome the limitations of non-parametric motion; we only cite it as an example of the iterative update pipeline. For the tracking part, we intend to state that some long-term point trackers can achieve stable frame-to-frame mapping with multi-frame inputs (in fact the whole sequence) and infer the whole trajectory at once, which is similar in effect to our work. As for Particle Video, we cite it as the inspiration for PIPs. *** *3.Figure Improvement:* - **Response:** Thanks for the comment. As suggested, we have updated Figure 2 in the attached PDF. *** *4.Correlation Limitation:* - **Response:** Thanks for the comment. 
For in-plane rotations, the correlation between features at the reference time and the target time within a matching framework can describe the motion well, without the influence of perspective effects. Thus, we do not believe there is a problem in computing the correlation there. *** *5.Typos:* - **Response:** Thanks for pointing out the typos. We will fix all the typos in the revised version. *** *6.TRMSE Explanation:* - **Response:** Thanks for pointing out the issue. Eq. (10) aims to measure trajectory smoothness via the TRMSE metric. $N_s$ is the number of all valid pixel trajectories (sampled point trajectories) in the spatial dimension. For every pixel, the algorithm infers a continuous trajectory; in this metric, we take the spatial mean through this parameter. *** *7.Contiguity Explanation:* - **Response:** Thanks for the comment. By its classic definition, optical flow naturally depends on 2D pixel coordinates. The parametric continuous curve in our work is only used for the temporal representation. Hence, our model provides an independent point trajectory for every pixel, as mentioned in the TRMSE explanation above. Consequently, we can directly sample the corresponding ground truth at a given moment for error measurement. In fact, the parametric model is fully determined once all control points are decided. Thus, for an arbitrary timestamp $t_k$, we can sample from this continuous curve and obtain the estimate $F(t_k)$. *** **Questions** *** *1.Trajectory Inference:* - **Response:** Thanks for the comment. As mentioned in our paper, the model generates all control points of the B-splines for every pixel at once. Following the B-spline formulation, we only need a timestamp to get the corresponding flow vector relative to the reference time. The flow vectors of all the pixels in the successive images (2nd, 3rd, ...) in the sequence are then generated correspondingly. 
In our setting, each frame in the sequence is allocated a normalized timestamp calculated by dividing the frame index by the sequence length. Besides, our model directly infers long-range correspondences and does not depend on step-by-step iteration like classic optical flow, which is limited to adjacent frames. *2.Pixel Independence:* - **Response:** Thanks for the comment. In our paper, every single pixel's continuous flow is computed *independently*. Thus, it does not matter if multiple different trajectories collapse to a single pixel in some image; interestingly, this is exactly what happens when occlusion is observed in 3D space. *** **Limitations** *** *1.Assessment of work:* - **Response:** Thanks for the valuable comment. Through our method, stable continuous optical flow provides realistic spatial-temporal correspondence priors for vision tasks such as non-rigid structure-from-motion, simultaneous localization and mapping, and video understanding of dynamic scenes. Regarding the potential social impact, as the reviewer mentioned, continuous pixel motion trajectories could generate realistic dynamic changes from a few sequences and form a novel video. Hence, this technique could potentially be used to make fake videos with image generative frameworks, bringing more challenges for regulatory agencies. We will further elaborate on the social impact in detail, as the reviewer suggested. *** *2.B-spline Limitations:* - **Response:** Thanks for the valuable comment. We agree that real-world pixel motion can be rather complex. Consequently, for polynomial curves controlled only by the parameter *degree*, the flexibility is limited. In this paper, we choose the B-spline as our fitting model because the shape of this curve additionally depends on adjustable control points regressed by our end-to-end framework. By adjusting certain control points, the local segment of the curve can be changed with higher flexibility. 
Even though flexible spline curves like B-splines achieve improved ability in handling complex motions, we still have to admit the limitation of *all parametric models*: a strong explicit constraint potentially leads to better performance on motion scenery more similar to the training set. --- Rebuttal Comment 1.1: Comment: I appreciate the answers of the authors to my questions. After reading the other reviews and comments I decided to change my assessment. Using a parametrised motion model for OF is not new, but using a neural model of B-splines is to a certain extent novel to the vision community. However, I have to admit that the limitations and the broader impact of this work need to be elaborated in their own section in the paper. Following the conference code of conduct, ethical and societal impacts must be addressed adequately in the paper. --- Reply to Comment 1.1.1: Comment: **Response:** We would like to thank the reviewer for the prompt and insightful comments. We are glad to hear the reviewer decided to change the assessment. Here we would like to re-emphasize our main contributions. **First**, the novel concept of **continuous optical flow** is proposed to describe dense and continuous pixel motion with an explicit parametric B-spline curve. **Second**, the combination of neural optimization and neural ODEs (ODE-ConvGRU) for optical flow computation is also a novelty, as agreed by the reviewers, in solving the problem. **Third**, a specific evaluation system for this blank field is proposed, with valid training based on an independent simulation dataset. In terms of the **limitations and broader ethical or social impacts**, we have mentioned them in the last part of our paper with relatively short statements, and we will elaborate on them in the revised version for a comprehensive discussion. 
As for the limitations, our model with parametric curves still faces challenges in capturing abrupt and complex displacements, due to the trajectory smoothness and the motion priors from the training datasets. We believe that the ethical concerns of our method, training, and evaluation process are minimal, and there is no harm or bias to anyone. All weight parameters are learned from valid synthetic data [1] containing no PII, and the real data used for evaluation comes from TAP-Vid [2], which clearly states the fairness and unbiasedness of its benchmarks. We also recognize that high-quality image generative models could potentially utilize continuous flow for more realistic video generation, which could bring risks to the regulation of information sources for social media. Thus, the implementation of the whole algorithm will be authorized only for scientific purposes in the future. *** [1] Klaus Greff, Francois Belletti, Lucas Beyer, Carl Doersch, Yilun Du, Daniel Duckworth, David J Fleet, Dan Gnanapragasam, Florian Golemo, Charles Herrmann, et al. Kubric: A scalable dataset generator. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 3749–3761, 2022. [2] Carl Doersch, Ankush Gupta, Larisa Markeeva, Adria Recasens Continente, Lucas Smaira, Yusuf Aytar, Joao Carreira, Andrew Zisserman, and Yi Yang. TAP-Vid: A benchmark for tracking any point in a video. In NeurIPS Datasets and Benchmarks Track, 35:13610–13626, 2022.
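As a side note, the per-pixel B-spline sampling discussed in the Trajectory Inference response above can be sketched in a few lines. This is a hypothetical illustration only (in the actual framework the control points are regressed by the network; the degree, clamped knot placement, and all numeric values here are our assumptions), using SciPy's `BSpline`:

```python
import numpy as np
from scipy.interpolate import BSpline

def make_trajectory(control_points, degree=3):
    """Build a clamped B-spline trajectory on [0, 1] from per-pixel control points.

    control_points: (N, 2) array of (dx, dy) control points for one pixel.
    """
    n = len(control_points)
    # Clamped knot vector so the curve spans the whole normalized time range.
    knots = np.concatenate([
        np.zeros(degree),
        np.linspace(0.0, 1.0, n - degree + 1),
        np.ones(degree),
    ])
    return BSpline(knots, control_points, degree)

# Six control points, matching the N=6 of the ODE-6spline model (values made up).
ctrl = np.array([[0.0, 0.0], [0.5, 0.2], [1.1, 0.3],
                 [1.8, 0.2], [2.4, 0.0], [3.0, -0.1]])
traj = make_trajectory(ctrl)

# Flow relative to the reference frame at an arbitrary (non-sampling) timestamp.
flow_at_t = traj(0.37)   # shape (2,)
```

Because the knot vector is clamped, the curve starts at the first control point (zero displacement at the reference time) and any timestamp in [0, 1], sampled or not, can be queried.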
Summary: This paper presents a parametric representation of dense and continuous pixel motion over arbitrary time intervals. The ``continuous parametric flow`` concept is interesting. However, one of the core technical contributions is encoding the image with the ODE-ConvGRU, which is not closely related to ``continuous parametric flow`` and simply replaces the CNNs used in PIPs and RAFT. Strengths: 1. The concept ``continuous parametric flow`` is interesting. 2. ODE-ConvGRU is a stronger image feature encoder compared with CNNs. 3. The B-spline-based flow interpretation is novel. Weaknesses: 1. The comparison is unfair. The proposed model is trained with $N_{gt}=8$ but PIPs is only trained with 4 frames. Can you provide the performance of an 8-frame PIPs model and the officially released PIPs model? 2. What does ``implicit`` mean in the ``implicit feature representation``? For me, it's just a feature encoded by the ODE-ConvGRU instead of a CNN. I don't think it's necessary to introduce the ``implicit`` concept. 3. What's the relationship between the continuous flow and the ODE-ConvGRU? 4. The two-stage evaluation is not novel, and should not be regarded as a contribution. 5. Figure 2 should be improved. The symbols in the ``cost volume`` and ``correlation pyramid`` regions are not well aligned. The legend of the concatenation is weird. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: See Weaknesses. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: Limited by the chosen parametric model, it is challenging for the predicted point trajectory to express complex motions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. 
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Weakness:** *** *1.Unfair Comparison:* - **Comment:** The comparison is unfair. The proposed model is trained with $ N_{gt} = 8 $ but the PIPs is only trained with 4-frames. Can you provide the performance of an 8-frame PIPs model and the officially released PIPs model? - **Response1:** Thanks for the comment. In terms of the training setting, we expect to utilize a few sampled frames as inputs and achieve trajectory estimation through continuous optical flow estimation. **All baselines, including PIPs, and our model use the same number of sampling frames (e.g., 4) as inputs**. However, as the reviewer mentioned, the number of supervision moments $N_{gt}$ is a different concept from the number of input frames. According to our assumption, **all moments in a given video are valid for our continuous model.** Hence, the supervision can be randomly selected from these sampling or non-sampling timestamps with true spatial-temporal correspondence. For the discrete point-tracking methods, however, the supervision can only be selected from the sampling moments, which means **the number of supervision moments is equal to the number of inputs**. - **Response2:** As the reviewer mentioned, the official implementation of PIPs is an 8-frame-input model; we use this PIPs model for comparison against our method with 8-frame inputs. **The performance on real-world benchmarks is shown in Table 1 in the attached PDF**. As the table reports, our 8-frame model still outperforms PIPs with the same inputs in almost all inference ranges. Even though more inputs provide more context information and improve accuracy and robustness at sampling moments for both methods, our proposed continuous parametric flow is more stable and smooth at non-sampling moments, especially during long-term blind time, and maintains tracking through occlusions. *** *2.Implicit Representation:* - **Comment:** What does the implicit mean in the implicit feature representation? 
For me, it's just a feature encoded by the ODE-ConvGRU instead of a CNN. I don't think it's necessary to introduce the implicit concept. - **Response:** Thanks for the comment. In our paper, the *implicit* concept is used as the counterpart to explicit parametric constraints. The implicit features generated by ODE-ConvGRU aim to aggregate continuous spatial-temporal information, which can be understood as a kind of *memory* for a continuous video sequence, similar to the video compression work NeRV [1]. *** *3.Relations between Continuous Flow and ODE-ConvGRU:* - **Comment:** What's the relationship between the continuous flow with ODE-ConvGRU? - **Response1:** Thanks for the comment. We employ the ODE-ConvGRU unit to generate temporally continuous features, defined at arbitrary timestamps, from a handful of image embeddings at the sampling moments. These features are then used, with random sampling, to construct multi-time cost volume pyramids, which eventually guide the regression of the control points in an iterative manner. Through these control points, every flow trajectory is represented by an explicit B-spline curve. - **Response2:** The key part is how to obtain spatial-temporal features at *all moments* from a limited set of embeddings at the sampling moments. The first step is to aggregate all sampling features (short for spatial embeddings at sampling moments) into a suitable initial value for solving an ordinary differential equation (ODE); ConvGRU is used for this purpose. The neural ODE then computes the feature derivative with an independent neural network and finally obtains continuous features via built-in ODE solvers such as the Euler or Runge–Kutta methods. *** *4.Two-Stage Evaluation:* - **Comment:** The two-stage evaluation is not novel, and should not be regarded as a contribution. 
- **Response:** As stated in our contribution, the two-stage evaluation pipeline is designed for continuous flow estimation, as there is no related work in this area. In fact, we expect continuous flow to have two attributes: one is the fitting ability at non-sampling or blind moments, and the other is arbitrary time-to-time correspondence. Considering that the candidate moments for the continuous task are unlimited, we have to validate our method on the above two sub-tasks. We believe this also makes a contribution to the community. *** *5.Presentation Improvement:* - **Comment:** Figure 2 should be improved. The symbols in the cost volume and correlation pyramid regions are not well aligned. The legend of the concatenation is weird. - **Response:** Thanks for the suggestion. We will modify the legends of Figure 2 and improve the aesthetics of the whole figure. The revised Figure 2 is provided in the attached PDF. *** **Limitations:** *** *1.Parametric Model:* - **Comment:** Limited by the chosen parametric model, the predicted point trajectory is challenging to express complex motions. - **Response:** Thanks for the comment. We agree that all parametric models depend on motion priors from particular scenes and are limited when facing real-world complex motions. However, different from polynomial curves controlled only by the *degree* parameter, in this paper we choose the B-spline as our fitting model because the shape of this curve additionally depends on the partition knots and on adjustable control points regressed by our end-to-end framework. By adjusting certain control points, the local segment of the curve can be flexibly changed. Due to this piecewise optimization, the B-spline enables higher flexibility in handling complex motions. *** [1] Chen H, He B, Wang H, et al. NeRV: Neural Representations for Videos. Advances in Neural Information Processing Systems, 2021, 34: 21557-21568.
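To make the solver step mentioned in Response 2 above concrete, here is a minimal, hypothetical sketch of obtaining a feature at an arbitrary timestamp by integrating an ODE from an aggregated initial state. Plain NumPy is used, and a fixed linear map stands in for the learned derivative network; the dimensions and step count are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8)) * 0.1  # stand-in for the derivative network's weights

def dfeat_dt(h):
    # In the actual framework this derivative is a neural network; a fixed
    # linear map followed by tanh stands in here for illustration.
    return np.tanh(W @ h)

def feature_at(h0, t, steps=100):
    """Explicit-Euler integration of dh/dt = dfeat_dt(h) from time 0 to t."""
    h, dt = h0.copy(), t / steps
    for _ in range(steps):
        h = h + dt * dfeat_dt(h)
    return h

h0 = rng.standard_normal(8)      # initial state aggregated by the ConvGRU
h_mid = feature_at(h0, 0.37)     # feature queried at a non-sampling timestamp
```

The same initial state can be integrated to any timestamp, which is what lets the encoder produce features for moments that were never observed as input frames; adaptive Runge–Kutta solvers replace the fixed-step Euler loop in practice.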
Rebuttal 1: Rebuttal: **Global Response** In the attached PDF, we provide two figures and one table. Table 1 reports the comparison with PIPs with 8-frame inputs on real-world datasets. Figure 1 shows special cases of extremely small and large motions. Figure 2 illustrates the updated pipeline of our proposed framework. Pdf: /pdf/14fef4884e42ca2402b5d1b9a11171e7a672adb2.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: A temporally continuous parametric optical flow method, based on B-splines, is presented. The proposed network takes as input L frames (and timestamps) and outputs a tensor of size 2NxHxW, i.e. N control points for each pixel. In practice N=6. The network architecture relies on neural ODEs, ConvGRU and a multi-time correlation pyramid. The proposed method is evaluated against RAFT and PIPs on a synthetic dataset (which is a contribution of this paper) and two real datasets (Vid-DAVIS and Vid-Kinetics). An ablation study is also presented. Strengths: 1. The idea of using B-splines to model a temporally continuous parametric optical flow is simple and efficient. 2. The proposed architecture is not trivial and well thought out. 3. The method outperforms both RAFT and PIPs. Weaknesses: 1. The paper contains many typos, e.g. l.152 feed -> fed, l.154 nerual, l.251 will, etc. 2. Some results are strange; for instance, in Table 2, PIPs and ODE-6spline have the exact same TRMSE on Vid-DAVIS. 3. In Table 1, PIPs outperforms ODE-6spline in terms of TRMSE at 16f on the Query-Stride set (11.27 vs 11.38) but it is not in bold. 4. Using ODE-ConvGRU does not seem new (it is said it was proposed in [49]). Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Please answer the above "weaknesses". I am currently recommending to reject the paper as I feel the contribution is rather limited for NeurIPS. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: . Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Weakness** *** *1.Typos*: - **Comment:** The paper contains many typos, e.g. l.152 feed -> fed, l.154 nerual, l.251 will, etc. - **Response:** Thanks for pointing out the typos. We will fix all of them and further proofread the manuscript. *** *2.Data Error*: - **Comment:** Some results are strange, for instance, in Table 2, PIPs and ODE-6spline have the exact same TRMSE on Vid-DAVIS. - **Response:** Thanks for pointing out this issue. We are sorry that we unexpectedly copied erroneous TRMSE results from the raw data, displaying exactly the same performance for PIPs and ODE-6spline. Here we show the corrected table below. As the table reports, ODE-6spline in fact outperforms PIPs and RAFT in terms of TRMSE over all inference ranges of the Vid-DAVIS benchmark, and a similar trend appears on the other benchmarks. Table. Updated TRMSE metric of ODE-6spline on the Vid-DAVIS benchmark | Method | Metric | Benchmark | 20f | 24f | 28f | 32f | |:------------------:|:------:|:---------:|:---------:|:---------:|:---------:|:---------:| | ODE-6spline (Ours) | TRMSE | Vid-DAVIS | **12.96** | **15.76** | **18.12** | **20.27** | --- *3.Omission Mark*: - **Comment:** In Table 1, PIPs outperforms ODE-6spline in terms of TRMSE at 16f on the Query-Stride set (11.27 vs 11.38) but it is not in bold. - **Response:** Thanks for pointing out the issue. We agree with the reviewer that PIPs slightly outperforms ODE-6spline in terms of the TRMSE metric at 16f on the Query-Stride set in Table 1. We will update Table 1 correspondingly as suggested. Note that our method still shows comparable performance at 16f and better temporal smoothness at the other inference scales. --- *4.Contribution Clarification*: - **Comment:** Using ODE-ConvGRU does not seem new (it is said it was proposed in [49]). - **Response:** Thanks for the comments. Here we would like to further clarify the main contributions of our work (as stated in the main manuscript). 
In the paper, we present *continuous parametric optical flow*, which is a parametric representation of dense and continuous motion over arbitrary time intervals. This contribution has been acknowledged by all the reviewers (Reviewer dH6J: ``the concept of continuous parametric flow is interesting``, Reviewer sWYx: ``seems novel``, Reviewer JXsp: ``the idea of estimating continuous parametric flow is interesting``, and Reviewer TYqX: ``the idea of estimating continuous optical flow from a neural network seems relatively novel``). Note that Reviewer 7yvr also marked our idea of developing continuous parametric optical flow as ``simple and efficient`` and stated that ``the proposed architecture is not trivial and well thought out``. In solving the problem, we utilize the ODE convolutional GRU (ODE-ConvGRU) to encode implicit continuous features and establish multiple cost volumes for refinement. This combination of neural optimization and neural ODEs for optical flow computation is also novel, as agreed by the reviewers. In this paper, we employ the ODE-ConvGRU unit for our novel problem and solution framework, but we have not claimed the architecture design of ODE-ConvGRU itself as our contribution. Note that the usage strategy and objective of ODE-ConvGRU in this paper also differ from existing work such as Reference [49]. In our paper, the ODE-ConvGRU encoder generates continuous spatial-temporal features, defined at arbitrary timestamps rather than at discrete frames, from a few visible inputs, and provides targets for multi-time cost volume construction. --- Rebuttal Comment 1.1: Title: Response Comment: Hello, Thank you for your answers. Even if the paper is technically sound, I still find the contributions limited for a conference like NeurIPS. Nevertheless, I can see that I am the only one recommending to reject the paper. As a consequence, I will keep my initial rating but I will not fight against the other reviewers if they keep recommending to accept the paper. 
Best regards, 7yvr --- Reply to Comment 1.1.1: Title: Detailed comments Comment: We thank Reviewer 7yvr's efforts in the reviewing process. Thanks for acknowledging our contributions as "technically sound", "The idea of using B-splines to model a temporally continuous parametric optical flow, is simple and efficient", "The proposed architecture is not trivial and well thought out" and "The method outperforms both RAFT and PIPs.". In the rebuttal, we have addressed all the comments with clear **explanations**, **numerical analysis** and **illustrations**. If the reviewer still has **any** remaining concern with respect to our submission, please raise it, and we will be happy to discuss. The Authors
Generalizable Lightweight Proxy for Robust NAS against Diverse Perturbations
Accept (poster)
Summary: The paper concerns the automatic generation of architectures that are robust to diverse perturbations. Neural architecture search (NAS) has been used for the automatic generation of such architectures, but the paper notes that most of those architectures are dedicated to clean accuracy, which leaves the resulting architectures unprotected against (adversarial) perturbations. One of the reasons that previous works have focused on clean accuracy is that making the architecture robust to perturbations is costlier and results in methods that are computationally heavy. This is precisely the gap the paper intends to fill by proposing a lightweight way to obtain robust architectures. Concretely, it considers a "zero-cost proxy" that considers the consistency across features, parameters and gradients of clean/perturbed images. The exact heuristic, called CRoZe, is provided in eq. 10 and accounts for the aforementioned consistency across features, parameters and gradients. The method is empirically validated on the small-scale datasets CIFAR10, CIFAR100 and ImageNet16. **Update**: I am thankful to the authors for their answers. I strongly believe that the discussion and points below should be included in the camera-ready version, along with the results on accuracy. Practitioners care about both clean and robust accuracy, and this is the metric that should be compared with standard benchmarks. In addition, the insights reported during the rebuttal period might be interesting to the reader, therefore I would urge the authors to include the relevant insights in the paper. Strengths: NAS is an important way to discover new architectures and as such I believe the paper is relevant to the community. In addition, the paper proposes a method to obtain robust architectures, which is a topic of intense research in the ML community. In addition, I find figures 1, 2 quite clear in the message they want to convey. 
Having said that, the paper raises a lot of questions and could be improved (see below). Weaknesses: The paper is not entirely clear in a number of paragraphs and legends. For instance, in fig. 2 the CC Accuracy and HRS accuracy are not referred to in the introduction, so they leave the reader searching or wondering about them. In addition, the related work with respect to train-free NAS is slightly outdated and more methods do mitigate the issue the paper raises, i.e. the computational cost. For instance, the papers [1-3] are directly relevant. It is also recommended to include those in the related experimental validation. What is more, I find the experimental validation slightly unusual, since typically in recognition we are interested in the clean/robust accuracy, especially when comparing with other methods. Minor: The paper would benefit from proofreading since there are several mistakes or phrases that are unclear. For instance, "the gradients similarity as an evaluating", "poorer correlations" (line 263), "using a less number of sampled words". [1] NASI: Label- and Data-agnostic Neural Architecture Search at Initialization, ICLR’22. [2] Knas: Green neural architecture search, ICML’21. [3] Generalization Properties of NAS under Activation and Skip Connection Search, NeurIPS’22. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: The paper mentions that previous NAS approaches paid less attention to robustness. "This can result in [...] limiting the practical deployment of NAS in real-world safety-critical applications". There are no citations or references about such safety-critical applications for NAS, so I am wondering what those applications are. The paper claims that "to have a correct prediction on the perturbed input x', the model needs to extract similar features between x' and x". Is there any proof that this is a necessary condition? Otherwise, I find this a strong claim that is nontrivial to me. In addition, I am wondering whether eq. 
2 is a sufficient condition and whether there is a reference for this, or what the norm is. In sec. 3.3 there are a number of unclear claims to me. Firstly, why should the features from two different functions $f_{\theta}$ and $f_{\theta^r}$ be similar? Secondly, what is the $z_m$ in eq. 5? Is it the output of $e_{\psi}$ from sec. 3.1? Thirdly, in high-dimensions, aren't the features very likely to be (near) orthogonal? So, I am not sure what the eq. 5 can capture in realistic networks that have representations of thousands or millions dimensions. The paper mentions that "Accordingly, since the higher similarity of single-step updated parameters may promote the model to converge to an identical or similar parameter space for both tasks, we evaluate the parameter similarity as one of our proxy terms as follows:". I am wondering what the last part, i.e. proxy term, means in this context. How does this proxy term mitigate the drawback mentioned in the previous sentence? As I understand, the final cost is the one in eq. 10, but I am wondering why the multiplication is selected for this. Is there any intuition for this? The reference [17] mentioned as a benchmark is only evaluating the robustness at test-time, right? The architectures are trained with clean training. What is the 0.747 in the number of parameters (e.g. in table 1)? Is this million parameters? This is not clear to me. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: The limitations are mentioned in the supplementary. 
The text "while existing NAS frameworks require enlarged computational resources when utilizing larger datasets such as ImageNet, our method requires constant computational resources that are unaffected by the specific tasks" mentions ImageNet, but this paper does not do search on ImageNet. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1. A number of paragraphs and legends are not clear.** * We will clarify that CC refers to the Common Corruption dataset, and HRS accuracy stands for Harmonic Robust Score, which is the harmonic average of clean and robust accuracy in the paper, and change the names accordingly. --- **W2. Prior train-free NAS works are outdated. NASI, KNAS, and Eigen-NAS should be included.** * We will include NASI, KNAS, and Eigen-NAS in the related work. Moreover, **in Table R5 of the PDF, we verified the effectiveness of our proxy** in identifying the robust architecture compared to those works. CRoZe **consistently exhibits the highest clean and robust accuracies** on CIFAR10 and CIFAR100 compared to recent train-free NAS works, **outperforming the best baseline (KNAS) by 0.32% and 4.26%** for clean and average robust accuracies on CIFAR10. --- **W3. Experimental validation seems unusual.** * Our approach aims to seek an architecture that performs well on both clean and perturbed inputs. Thus, we validate our proxy by measuring the correlation between our proxy score at initialization and the final performance of architectures from the search space. **This follows the protocol used in prior works on zero-cost proxies [1,2]**. * **In Table 3, we present results that demonstrate the superiority of our proxy in terms of clean and robust accuracy, following an end-to-end evaluation similar to [3]**. Moreover, in **Table R2 of the PDF**, we show end-to-end NAS results in adversarial training settings, achieving the **highest average robust accuracy of 55.77% with 15 times less search cost** compared to AdvRush. From these experimental validations, we believe our proxy is able to find robust architectures rapidly. --- **W4. Paper needs proofreading.** * We will incorporate the corrections in the revised version. - line 188: Thus, we employ gradient similarity as a means of evaluating the ~. - line 263: lower correlations --- **Q1. 
No citations on safety-critical applications for NAS.** * NAS [4,5] is used to find the optimal architecture for various tasks of real-world applications, which may be safety-critical, such as autonomous driving systems [6]. If NAS does not consider robustness in such a setting, the resulting system may be vulnerable to perturbations. --- **Q2. Is there evidence that the model needs to extract similar features against clean and perturbed images for correct prediction?** * Our proxy is designed on top of the theoretical understanding that robust architectures learn invariant features against clean and perturbed images [7,8,9], irrespective of the type of perturbation applied. * We assume that 1) the neural network is continuous and 2) the perturbations are semantic-preserving, satisfying $||x'-x||<\delta$, where $\delta$ is a sufficiently small bound. We will clarify that $||\cdot||$ is the $L_p$ norm. --- **Q3. Why should features from two different functions be similar? Is eq.5 the output from sec 3.1? Does eq.5 capture realistic networks with high dimensions?** * Since the two functions represent the clean surrogate network (randomly initialized) and the robust surrogate network (perturbed weights), both within the same architecture, we can calculate the feature difference between the clean and robust model. This proxy term is inspired by how robust models extract perturbation-invariant features from clean and perturbed inputs. * $z_m$ is the output vector of the $m^{th}$ layer of $f_\theta$. We will clarify this in the manuscript. * Even in high dimensions, Eq.5 can capture whether the model can extract invariant features from clean and perturbed inputs. Specifically, features from clean and perturbed images with a dimension of 512 show a high similarity of 0.983, i.e., they are far from orthogonal. --- **Q4. 
Exact context of lines 183-185.** * The proxy term is used to evaluate the parameter similarity, to estimate how the model will converge to an identical or similar parameter space for both tasks. By measuring the similarity of the single-step updated parameters, we can assess how well the convergence process is likely to proceed. --- **Q5. Why is multiplication selected for the proxy?** * As shown in Figure 1 (b), we believe each component should be proportional to the final cost in eq.10. Therefore, we employ the multiplication of the components for the final proxy. --- **Q6. The referenced benchmark evaluates the robustness only at test-time?** * Yes. To provide a more comprehensive robustness evaluation, we further provide results (Right, Table 1) on robustness against diverse perturbations of adversarially-trained architectures from the subset of the NAS-Bench-201 search space. --- **Q7. Meaning of 0.747 in Table 1.** * The value 0.747 represents **Spearman's rank correlation coefficient** between the rank derived from the number of parameters and the rank derived from the final validation accuracies. --- **L1. The paper mentions ImageNet but no experiments on that.** * **We provide the experimental results on ImageNet16-120 in Table 2**. ImageNet16-120 is a widely used benchmark in NAS works [1,2,10], which is a downsampled variant of ImageNet with 151.7K training instances for 120 selected classes. We will update the paper to explicitly refer to ImageNet16-120 in the revised version to ensure clarity. 
--- [1] Zero-Cost Proxies for Lightweight NAS\ [2] Neural Architecture Design and Robustness: A Dataset\ [3] Meta-prediction Model for Distillation-aware NAS on Unseen Datasets\ [4] PC-DARTS: Partial Channel Connections for Memory-Efficient Architecture Search\ [5] Once-for-All: Train One Network and Specialize it for Efficient Deployment\ [6] Robust Physical-World Attacks on Deep Learning Visual Classification\ [7] Feature Denoising for Improving Adversarial Robustness\ [8] Adversarial Examples Are Not Bugs, They Are Features\ [9] Explaining and Harnessing Adversarial Examples\ [10] NAS-Bench-201: Extending the Scope of Reproducible Neural Architecture Search --- Rebuttal Comment 1.1: Title: Response to the authors Comment: Dear authors, I am thankful for your responses. However, I still have questions that are not addressed by the rebuttal: * The references [4, 5] mentioned above do not address the claim "This can result in [...] limiting the practical deployment of NAS in real-world safety-critical applications". Are there any references for this? * It's still not clear to me why the main tables are focusing on correlations other than clean/robust accuracy. I notice the same applies above to some of the responses. Could the authors elaborate on that? * In addition, this response is unclear to me: "To provide a more comprehensive robust evaluation, we further provide results (Right, Table 1) with robustness against diverse perturbations of adversarially-trained architectures from the subset of the NAS-Bench-201 search space". Are there any details in the paper about this subset? How was it selected? * Are there any insights from the components selected by NAS? --- Reply to Comment 1.1.1: Comment: Dear Reviewer, We deeply appreciate the time and effort you've dedicated to reviewing our paper. During the remaining discussion period, we will do our best to address your concerns. We provide detailed responses to your additional concerns below. 
If you have any further concerns about our work, do not hesitate to share your comments. We would be delighted to address any additional questions or concerns you may have. Best,\ Authors --- **Q1. Are there any citations on safety-critical applications for NAS?** \ For clarification, we elaborate on our motivation and the necessity of robust NAS with detailed references. NAS [4,5] searches for high-performing neural architectures tailored for specific tasks or datasets within a constrained computational budget. Thus, NAS can be applied to a broad range of applications, including safety-critical ones such as object detection [2] in autonomous driving systems [1], image recognition [3] in manufacturing systems [6], mobile systems [7], speech recognition systems [8] or medical imaging systems [9]. However, the need for robust NAS [10,11,12] has recently emerged, as prior works often overlooked the importance of robustness during neural architecture search. Consequently, the vulnerability of neural architectures discovered by previous NAS methods is inevitable [13,14] when faced with diverse perturbations [15,16,17]. 
[1] Autonomous Driving with Deep Learning: A Survey of State-of-Art Technologies\ [2] Progressive differentiable architecture search: Bridging the depth gap between search and evaluation\ [3] Neural architecture search with reinforcement learning\ [4] PC-DARTS: Partial Channel Connections for Memory-Efficient Architecture Search\ [5] Once-for-All: Train One Network and Specialize it for Efficient Deployment\ [6] Using Deep Learning to Detect Defects in Manufacturing: A Comprehensive Survey and Current Challenges\ [7] MnasNet: Platform-aware neural architecture search for mobile\ [8] NAS-Bench-301 and the case for surrogate benchmarks for neural architecture search\ [9] NAS-Unet: Neural architecture search for medical image segmentation\ [10] Neural Architecture Design and Robustness: A Dataset\ [11] Neural Architecture Search: A Survey\ [12] Neural Architecture Search: Insights from 1000 Papers\ [13] On the security risks of AutoML\ [14] AdvRush: Searching for Adversarially Robust Neural Architectures\ [15] Benchmarking Neural Network Robustness to Common Corruptions and Perturbations\ [16] Towards Deep Learning Models Resistant to Adversarial Attacks\ [17] Robust Physical-World Attacks on Deep Learning Visual Classification\ --- **Q2. Could the authors elaborate on why the main tables are focusing on correlation?**\ To address your concerns, **we will replace the main table with Table 3, which shows the final clean/robust accuracy** of the architecture searched by our proxy in the DARTS search space. Additionally, we provide **experimental results in the NAS-Bench-201 search space with final clean/robust accuracy in Table R5 of the PDF file** (summarized in the following table). However, we kindly ask you to note that **rank correlation is also an important metric, widely used in train-free NAS studies [1,2,3,4]**.
The primary goal of the zero-cost proxy is to rapidly and accurately estimate the final performance of a neural architecture at the initialization state. Therefore, the rank correlation between the final performance and the value of the proxy effectively demonstrates how well our proxy is designed to search for high-performing architectures and its applicability to diverse NAS tasks. **[NAS-Bench-201 search space]** | |Standard-Trained| | | | |-|-|-|-|-| |Proxy|Clean|FGSM|PGD|CC.| |NASWOT|92.96|59.90|41.70|35.19| |NASI(T)|93.08|62.60|41.10|34.99| |NASI(4T)|93.55|64.90|44.00|36.12| |Eigen-NAS|93.46|59.60|36.80|36.75| |KNAS|93.38|63.80|44.90|34.54| |CRoZe|**93.70**|**68.00**|**48.20**|**38.83**| **[DARTS search space]** | | |Standard-Trained| |Adversarially-Trained| | | |-|-|-|-|-|-|-| |NAS Type|Method|Clean|Rob.(FGSM)|Clean|Avg Rob.(PGD, CW, SPSA, LGV, AutoAttack)|Search Cost (GPU sec)| |Clean one-shot|DrNAS|94.64|13.96|86.45|55.60|46857| |Robust one-shot|AdvRush|94.80|16.17|85.98|55.57|251245| |Clean zero-shot|GradNorm|92.84|15.55|81.61|51.61|9740| |Clean zero-shot|SynFlow|90.41|10.59|77.08|52.96|10138| |Robust zero-shot|CRoZe|94.45|**22.38**|85.05|**55.77**|17066| [1] Zero-Cost Proxies for Lightweight NAS\ [2] Neural Architecture Design and Robustness: A Dataset\ [3] NAS-Bench-201: Extending the Scope of Reproducible Neural Architecture Search
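For readers less familiar with the metric, Spearman's rank correlation between proxy scores and final accuracies can be sketched in a few lines. This is a toy pure-Python illustration with made-up numbers; actual evaluations would typically use `scipy.stats.spearmanr`, which also handles ties:

```python
def ranks(xs):
    # rank positions 1..n (no tie handling in this toy sketch)
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for pos, i in enumerate(order):
        r[i] = pos + 1.0
    return r

def spearman(a, b):
    # Spearman's rho = Pearson correlation of the ranks
    ra, rb = ranks(a), ranks(b)
    n = len(ra)
    ma, mb = sum(ra) / n, sum(rb) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    va = sum((x - ma) ** 2 for x in ra)
    vb = sum((y - mb) ** 2 for y in rb)
    return cov / (va * vb) ** 0.5

# hypothetical proxy scores and final robust accuracies for 5 architectures
proxy_scores = [0.12, 0.80, 0.45, 0.95, 0.30]
robust_acc   = [30.1, 45.2, 40.3, 48.7, 33.5]
print(spearman(proxy_scores, robust_acc))  # prints 1.0
```

A correlation of 1.0 means the proxy orders the architectures exactly as their final accuracies do, which is why rank correlation is a natural headline metric for zero-cost proxies.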
Summary: The paper proposes a lightweight approach to generate new architectures with robustness formulated in the NAS process. The paper claims that the proposed method is capable of generating architectures that can learn generalized features with higher robustness. Strengths: Efficient algorithms for generating robust architectures are an important topic in the NAS field, and the paper focuses on an important topic. The proposed method is simple but effective based on the experimental results. Weaknesses: While I like the approach, I think the experimental results are too limited to show the effectiveness of the proposed method properly. 1- Most of the experiments are based on FGSM, which, as mentioned in the paper, is currently considered the weakest adversarial attack. 2- I can see one experiment on PGD in the paper, but it is very limited. 3- I was hoping to see some comparison with well-known human-made architectures (like ResNet) for both clean and robust accuracy. 4- Having more comprehensive experiments on more SOTA adversarial attacks like AutoAttack would help to show the effectiveness of the proposed method better. 5- It has been claimed that the proposed method is "Generalized to Diverse Perturbations", but I think there is not enough evidence, and more experiments should have been done to do it justice. Technical Quality: 3 good Clarity: 3 good Questions for Authors: The main question is whether it is possible to provide a more comprehensive evaluation to show the effectiveness of the proposed method. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Based on the current experimental results it seems the effectiveness of the proposed method is very limited Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1, 2. Experimental results are limited to FGSM and PGD.** - First, we would like to clarify that we report experimental results on three types of perturbations, including **FGSM, PGD, and 16 types of common corruption (Table 1, 2, 3)**. However, following your suggestion, we further provide additional experimental results against a wide range of recent adversarial attacks, including **CW, DeepFool, SPSA, LGV, and AutoAttack** on the NAS-Bench-201 search space in **Table R1 in the PDF file**. Specifically, for the accurate evaluation of the robust accuracy, we adversarially train each candidate architecture in the NAS-Bench-201 search space on CIFAR-10 following the protocol from [3]. Then, we subsequently evaluate the robustness of those models against diverse adversarial attacks. - **Our proxy shows the highest overall correlation of 0.399 for robust accuracies** while the best baseline (GradNorm) achieves 0.352, which demonstrates that our proxy can rapidly search for a neural architecture capable of learning robust features against diverse attacks. (*Detailed table can be found in Table R1 in PDF*) |Proxy Type|FGSM|PGD|CW|DeepFool|SPSA|LGV|AutoAttack|Avg.| |-|-|-|-|-|-|-|-|-| |FLOPs|0.357|0.446|0.189|0.364|0.196|0.347|0.365|0.323| |GradNorm|0.378|0.446|**0.264**|0.421|0.149|0.401|0.405|0.352| |NASWOT|0.311|0.354|0.240|0.250|0.197|0.265|0.280|0.271| |CRoZe|**0.441**|**0.532**|0.220|**0.454**|**0.240**|**0.449**|**0.458**|**0.399**| ------------ **W3. Comparison with human-made architecture is needed.** - Following your valuable suggestion, we additionally provide comparisons with well-known human-made architectures with **similar parameter sizes, such as Conv4, Conv6, MobileNetV2, and ResNet12**. The neural architectures identified by our proxy even show **better clean and robust accuracy with fewer parameters than ResNet12**. 
- Specifically, our method achieves **2.11% and 2.35% higher clean and robust accuracies (PGD-20) with 2.48MB fewer parameters compared to ResNet12**. All models are adversarially trained following the conventional protocol [1] on CIFAR-10 and evaluated against PGD [1]. (*Detailed table can be found in Table R4 in PDF*) | | # Params (MB)|Clean|PGD-20|HRS| |-|-|-|-|-| |Conv4|0.03|60.12|29.98|40.01| |Conv6|0.05|65.92|33.04|44.02| |MobileNetV2|2.24|66.04|33.04|46.79| |ResNet12|8.00|82.94|49.69|62.15| |CRoZe|5.52|**85.05**|**52.04**|**64.57**| ---------- **W4. Experimental results against AutoAttack would be helpful to demonstrate its effectiveness.** - We evaluated against SOTA adversarial attacks, including **AutoAttack, SPSA, LGV, DeepFool, and CW in Table R2 and Figure R1 in the PDF file**. All architectures searched by each method on the DARTS search space are adversarially trained following [1] on CIFAR-10. - The architecture identified by our proxy shows **the highest average robust accuracy of 55.77%** against various adversarial perturbations, with a **search cost 15 times lower** than that of AdvRush. |NAS Type|Method|Clean|PGD-20|CW|SPSA|LGV|AutoAttack|Avg. Rob.|Search Cost (GPU sec)| |-|-|-|-|-|-|-|-|-|-| |Clean one-shot|DrNAS|86.45|54.66|11.39|85.73|79.82|52.40|55.60|46857| |Robust one-shot|AdvRush|85.98|53.89|6.68|80.85|79.61|51.88|55.57|251245| |Clean zero-shot|GradNorm|81.61|49.86|12.02|77.19|73.27|46.69|51.61|9740| |Clean zero-shot|SynFlow|77.08|45.95|26.50|75.78|74.14|42.45|52.96|10138| |Robust zero-shot|CRoZe|85.05|52.04|16.82|83.23|77.62|49.15|**55.77**|17066| ----------------------------------- **W5. Not enough evidence for the claim of "Generalized to Diverse Perturbations".** - We clarify that our model demonstrates its ability to generalize to **7 types of adversarial attacks (Table R1, R2 in the PDF file) and 16 types of common corruption perturbations (Table 1, 2) across diverse search spaces**.
In particular, a simplified version of the Spearman's rank correlation results on the adversarially-trained NAS-Bench-201 search space against recent adversarial attacks (Table R1) is presented below. - Therefore, we believe our approach is effective in searching for neural architectures capable of learning generalizable robust features against diverse perturbations. - Through extensive experiments on the NAS-Bench-201 and DARTS search spaces against a wide range of perturbations in both standard (Table 1, 2) and adversarial (Table 1, R1, R2) training settings, we validate our approach, as noted by reviewer m6P2. |Proxy Type|CW|DeepFool|SPSA|LGV|AutoAttack|Avg.| |-|-|-|-|-|-|-| |GradNorm|**0.264**|0.421|0.149|0.401|0.405|0.352| |NASWOT|0.240|0.250|0.197|0.265|0.283|0.271| |CRoZe|0.220|**0.454**|**0.240**|**0.449**|**0.458**|**0.399**| ----------------------------------- **Question/Limitation. A more comprehensive evaluation is necessary to validate the effectiveness of the approach.** - Thanks to your suggestions, in addition to FGSM, PGD, and common corruptions (Table 1, 2, 3), we further conducted experiments on a wide range of attacks including **CW, DeepFool, SPSA, LGV, and AutoAttack** on the NAS-Bench-201 search space (Table R1) and the DARTS search space (Table R2). - Moreover, we demonstrate that our proxy can rapidly search for neural architectures capable of learning generalizable representations against diverse perturbations in both standard (Table 1 Left, 2, 3) and adversarial settings (Table 1 Right, R1, R2). We strongly believe that these additional experimental results show the clear effectiveness of our method against a diverse set of attacks, which further strengthens our work and its potential practical impact. Thank you for the valuable suggestion.
[1] Madry et al., Towards Deep Learning Models Resistant to Adversarial Attacks, ICLR 2018 \ [2] Croce et al., Reliable Evaluation of Adversarial Robustness with an Ensemble of Diverse Parameter-free Attacks, ICML 2020\ [3] Wong et al., Fast is better than free: Revisiting adversarial training, ICLR 2020 --- Rebuttal Comment 1.1: Title: Gentle Reminder Comment: Dear Reviewer, This is a gentle reminder for the discussion period. We have addressed all of your initial concerns and provide additional experimental results in our previous responses. We sincerely hope to discuss our work and look forward to your constructive comments. For your convenience, we provide the **short summary** of our previous responses to your initial concerns. Detailed responses can be found in the previous responses. Best,\ Author --- > ### Summary >- **Experimental results are limited to FGSM and PGD. Experimental results against AutoAttack are needed.** > - (Shown through additional experiment in Table R1) Our method achieves **the highest overall correlation of 0.399 for average robust accuracies (CW, DeepFool, SPSA, LGV, and AutoAttack)** while the best baseline shows 0.352. > - (Shown through additional experiment in Table R2, Figure R1) Our proxy demonstrates **the highest average robustness of 55.77%** against various attacks (CW, LGV, SPSA, and AutoAttack) with **15 times less search cost** compared to baseline. >- **Comparisons with human-made architecture are needed.** > - (Shown through additional experiment in Table R4) Under adversarial training, our method achieves **2.11% and 2.35% higher clean and robust accuracies** (PGD-20) with 2.48MB fewer parameters compared to ResNet12. 
>- **Not enough evidence for the claim of ‘generalized to diverse perturbations’.** > - (Shown through additional experiment in Table R1, R2, Figure R1, Table 1, 2, 3) We clarify that our model demonstrates its ability to generalize to **7 types of adversarial attacks** including FGSM, PGD, CW, DeepFool, SPSA, LGV, and AutoAttack (Table R1, R2, Figure R1) and **16 types of common corruption perturbations** (Table 1, 2, 3) with diverse search space (NAS-Bench-201, DARTS) under both standard (Table 1 Left, 2) and adversarial training (Table 1 Right, R1, R2) settings. --- Reply to Comment 1.1.1: Title: Reminder: Please check our previous responses Comment: Dear Reviewer, We sincerely value the time and effort you have dedicated to reviewing our paper. We understand and appreciate the volunteer effort and time you’ve given to ensure a fair and constructive review process for NeurIPS. Given that **we have only 3 days remaining for further discussions**, we kindly request that you review our responses and the attached file. **To save you time, we have also provided a summarized response in our previous communication.** We are confident that we have addressed all of your concerns by providing additional experimental results and explanations. Therefore, we kindly ask you to incorporate these updates into your final review and score. Your consideration in this matter is highly appreciated. We eagerly look forward to your insightful comments and feedback. Best regards,\ Author
Summary: This work proposes a new zero-shot proxy to find robust NN architectures at initialization. The proxy utilizes the consistency of model features and gradients for clean and perturbed input. Experiments are conducted on robust NAS benchmarks. Performance is also provided for end-to-end NAS on the DARTS search space. Strengths: 1. This work provides a novel and effective proxy for the search of robust NN models 2. The proposed method is well formulated and easy to follow 3. Results are provided on both NASBench and end-to-end search to show the effectiveness of the proposed method Weaknesses: 1. The theoretical insight behind the proposed consistency score is unclear. As the score is only computed at the initialization of the model, it is unclear whether the proposed score is consistent for different random weight initializations, and if its correlation to the model robustness is affected by different adversarial training methods. More theoretical analysis or ablation studies are needed to answer these questions 2. End-to-end NAS performance is an important metric to show the true effectiveness of the proposed method. However, in Tab.3 the results are limited to clean training and robustness against FGSM perturbation only. More final performance results with adversarial training and different types of perturbation are needed to verify the contribution. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See weakness. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The work is limited in theoretical insights of the proposed method. No potential negative social impact is observed.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1. The theoretical insight is unclear, and ablation experiments on different weight initialization methods and adversarial training methods are needed.** - The underlying theoretical insight of our proxy is premised on the notion that **a robust model should learn invariant useful features with respect to the clean and perturbed images [1,2,8,9]**. Based on this theoretical insight, we design the proxy to measure the similarity of features, gradients, and parameters between the clean and perturbed inputs. - In response to your concerns regarding the compatibility of our proxy with various weight initialization methods, we conducted additional experiments with **Random initialization, Kaiming initialization [5], and Xavier initialization [6]** on the NAS-Bench-201 search space on CIFAR-10 in **Table R3 in the PDF file**. As demonstrated, **our proxy maintains a consistently higher correlation** compared to the baselines, **irrespective of the weight initialization method employed**. Specifically, our proxy shows the highest average correlation of **0.568 and 0.399 for the standard training and adversarial training scenarios, respectively, while the best baseline only achieves 0.529 and 0.352** against diverse perturbations with the same random weight initialization. Since our approach considers the similarity of the parameters and gradients between the clean and perturbed images, our superior performance holds regardless of the initialization method.
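To make the intuition above concrete, here is a toy sketch of a consistency-style score at random initialization: the cosine similarity of features and parameter gradients between a clean input and a perturbed copy. This is only an illustration of the general idea with a hypothetical one-layer network, not the paper's exact CRoZe proxy:

```python
import numpy as np

rng = np.random.default_rng(0)

def cosine(u, v):
    # cosine similarity with a small guard against zero norms
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

# toy one-layer "architecture" at random initialization
W = rng.normal(size=(8, 16))

def features(x):
    # forward pass: ReLU features
    return np.maximum(W @ x, 0.0)

def grad_wrt_W(x):
    # gradient of sum(features(x)) w.r.t. W, flattened
    mask = (W @ x > 0).astype(float)
    return np.outer(mask, x).ravel()

x_clean = rng.normal(size=16)
x_pert = x_clean + 0.1 * rng.normal(size=16)  # e.g. a Gaussian perturbation

# higher score = the untrained network responds consistently to both inputs
score = 0.5 * (cosine(features(x_clean), features(x_pert))
               + cosine(grad_wrt_W(x_clean), grad_wrt_W(x_pert)))
```

Because the score depends only on similarities of responses, not on any trained weights, it can be evaluated at initialization, which is why it is largely insensitive to the particular initialization scheme.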
(*Detailed table with more baselines can be found in Table R1 and R3 in the PDF*) | |Standard-Trained| | | | |Adversarially-Trained| | | | | | | | |-|-|-|-|-|-|-|-|-|-|-|-|-|-| | |Clean|FGSM|PGD|CC|Avg.|FGSM|PGD|CW|DeepFool|SPSA|LGV|AutoAttack|Avg.| |FLOPs|0.726|0.753|0.183|0.384|0.512|0.357|0.446|0.189|0.364|0.196|0.347|0.365|0.323| |SynFlow|0.777|0.778|0.163|0.396|0.529|0.369|0.442|0.202|0.397|0.196|0.387|0.383|0.339| |GradNorm|0.638|0.750|0.259|0.383|0.508|0.378|0.446|0.264|0.421|0.149|0.401|0.405|0.352| |NASWOT|0.660|0.511|-0.280|0.206|0.274|0.311|0.354|0.240|0.250|0.197|0.265|0.280|0.271| |CRoZe(Random)|0.823|0.826|0.188|0.436|**0.568**|0.441|0.532|0.220|0.454|0.240|0.449|0.458|**0.399**| |CRoZe(Kaiming)|0.812|0.818|0.189|0.430|0.562|0.428|0.512|0.217|0.443|0.227|0.426|0.436|0.384| |CRoZe(Xavier)|0.816|0.822|0.190|0.433|0.565|0.428|0.513|0.217|0.442|0.227|0.425|0.436|0.384| - With regard to adversarial training, unfortunately, **no benchmark datasets currently exist that contain the robust accuracies of models trained with diverse adversarial training methodologies**, such as AT [3] and TRADES [4]. Therefore, it is challenging to directly assess the dependence on the type of adversarial training method. However, based on RobustBench [7], we can presume that **different adversarial training methods may not affect the correlations**, because the performance ranking holds even across diverse architectures (WideResNet 70-16, WideResNet 34-10, and ResNet18). Additionally, the performance ranking between our method and the zero-cost proxy baselines remains consistent across different adversarial training types [3, 4], as shown in the following table.
|Proxy Type|Adversarial Training Type|Clean|PGD-20|Rank| |-|:-:|:-:|:-:|:-:| |GradNorm|[3]|81.61|49.86|2| | |[4]|72.43|44.84|2| |SynFlow|[3]|77.08|45.95|3| | |[4]|59.87|36.07|3| |CRoZe|[3]|85.05|52.04|1| | |[4]|79.48|51.89|1| [1] Xie et al., Feature Denoising for Improving Adversarial Robustness, CVPR 2019 \ [2] Ilyas et al., Adversarial Examples Are Not Bugs, They Are Features, NeurIPS 2019 \ [3] Madry et al., Towards Deep Learning Models Resistant to Adversarial Attacks, ICLR 2018 \ [4] Zhang et al., Theoretically Principled Trade-off between Robustness and Accuracy, ICML 2019 \ [5] He et al., Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification, ICCV 2015 \ [6] Glorot et al., Understanding the difficulty of training deep feedforward neural networks, AISTATS 2010 \ [7] Croce et al., RobustBench: a standardized adversarial robustness benchmark, NeurIPS 2021 \ [8] Goodfellow et al., Explaining and Harnessing Adversarial Examples, ICLR 2015 \ [9] Zhang et al., Understanding deep learning requires rethinking generalization, ICLR 2017 ---------- **W2. End-to-end performance is limited to clean training and robustness against the FGSM attack.** - We appreciate your thoughtful response. In light of your comments, we have included additional experimental results that demonstrate the end-to-end NAS performance within the DARTS search space, where the searched architectures are **adversarially trained on CIFAR-10** following [1] and are subsequently evaluated against diverse perturbations including **PGD, CW, SPSA, LGV, and AutoAttack in Table R2 in the PDF file**. Our proxy, as substantiated by the results, shows **superior robustness, with an average robust accuracy of 55.77%** against diverse perturbations, at a **search cost 15 times lower** than that of the existing robust NAS method (AdvRush). (*Detailed table can be found in Table R2 and Figure R1 in the PDF*) |NAS Type|Method|Clean|PGD-20|CW|SPSA|LGV|AutoAttack|Avg. Rob.|Search Cost (GPU sec)| |-|-|-|-|-|-|-|-|-|-| |Clean one-shot|DrNAS|86.45|54.66|11.39|85.73|79.82|52.40|55.60|46857| |Robust one-shot|AdvRush|85.98|53.89|6.68|80.85|79.61|51.88|55.57|251245| |Clean zero-shot|GradNorm|81.61|49.86|12.02|77.19|73.27|46.69|51.61|9740| |Clean zero-shot|SynFlow|77.08|45.95|26.50|75.78|74.14|42.45|52.96|10138| |Robust zero-shot|CRoZe|85.05|52.04|16.82|83.23|77.62|49.15|**55.77**|17066| [1] Madry et al., Towards Deep Learning Models Resistant to Adversarial Attacks, ICLR 2018 --------------------------------------------------- **Limitation. No potential negative social impact is observed.** - Since our approach does not pose any potential negative societal impact, we omit this discussion, in line with the NeurIPS 2023 author guidelines. --- Rebuttal Comment 1.1: Title: Gentle Reminder Comment: Dear Reviewer, This is a gentle reminder for the discussion period. We have addressed all of your initial concerns and provided additional experimental results in our previous responses. We sincerely hope to discuss our work and look forward to your constructive comments. For your convenience, we provide a **short summary** of our previous responses to your initial concerns. Detailed responses can be found in the previous responses. Best,\ Author --- > ### Summary >- **The theoretical insight is unclear.** > - The underlying theoretical insight of our proxy is premised on the notion that a robust model should learn invariant useful features with respect to the clean and perturbed images, which is motivated by previous works. (Please visit our initial response with a detailed explanation.) >- **Ablation studies on weight initialization methods and adversarial training methods are needed.** > - (Shown through additional experiment in Table R3) Our proxy maintains a consistently higher correlation compared to the baselines regardless of the weight initialization method.
> - (In previous response) Based on RobustBench [7] and our experimental results, we can presume that different adversarial training methods may not affect the correlations. >- **End-to-end performance is limited to clean training and robustness against the FGSM attack** > - (Shown through additional experiment in Table R2, Figure R1) Under the adversarial training scenario, we demonstrate that our proxy can identify robust architectures with the **highest average robust accuracy of 55.77% against PGD, CW, SPSA, LGV, and AutoAttack with 15 times less search cost** compared to the baseline. --- Rebuttal 2: Title: Thanks for the response Comment: I would like to thank the authors for the detailed responses. The response resolves my concern with the stability and effectiveness of the proposed method. I will increase my rating of the paper to weak accept. One remaining question about the provided results on training with different adversarial training methods: while I understand the consistency of the performance ranking, I'm not sure why TRADES appears to be worse in both clean and robust accuracy than PGD for all the cases. Is this due to some hyperparameter choices? --- Rebuttal Comment 2.1: Comment: **Dear Reviewer,** We sincerely appreciate the time and effort you dedicated to reviewing our work. We are pleased to note that our responses have addressed your initial concerns. In light of your feedback, we adversarially trained the cell-based candidate architectures using the same hyper-parameter settings as the stack-based architectures, as referenced in [1]. However, it's worth noting that the optimal hyper-parameters for the adversarial training of cell-based architectures using the TRADES method [2] might differ from those of stack-based architectures. This discrepancy can lead to reduced performance in both clean accuracy and robustness, and finding the optimal hyper-parameters for cell-based architectures can be challenging.
Despite these challenges, we attempted hyper-parameter tuning in a manner similar to [1] for cell-based architectures using TRADES. We will keep you updated on our progress and share our results as soon as possible. Thank you once again for your invaluable feedback. Best regards,\ Author [1] Pang et al., Bag of Tricks for Adversarial Training, ICLR 2021\ [2] Zhang et al., Theoretically Principled Trade-off between Robustness and Accuracy, ICML 2019
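For context, the TRADES objective [2] under discussion combines a clean cross-entropy term with a KL regularizer weighted by a trade-off hyper-parameter (the quantity whose tuning is discussed above). A minimal NumPy sketch, illustrative only, with the KL direction following the official reference implementation:

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over the last axis
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def trades_loss(logits_clean, logits_adv, labels, beta=6.0):
    n = len(labels)
    p_clean = softmax(logits_clean)
    p_adv = softmax(logits_adv)
    # cross-entropy on the clean logits (accuracy term)
    ce = -np.log(p_clean[np.arange(n), labels] + 1e-12).mean()
    # KL(p_clean || p_adv): pushes adversarial outputs toward clean ones
    kl = (p_clean * (np.log(p_clean + 1e-12) - np.log(p_adv + 1e-12))).sum(-1).mean()
    return ce + beta * kl
```

A larger `beta` emphasizes the robustness regularizer at the expense of clean accuracy; a `beta` tuned for stack-based architectures need not be optimal for cell-based ones, which is the tuning difficulty described above.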
Summary: This work introduces a lightweight proxy, CRoZe, designed to facilitate the development of Neural Architecture Search (NAS) based architectures that are robust across a diverse set of semantic-preserving perturbations. CRoZe operates by measuring consistency across the features, parameters, and gradients for a given clean image and its perturbed counterpart. This approach enables it to withstand a wide range of perturbations, unlike existing methods that focus on a specific set of perturbations (adversarial or out-of-distribution samples). Experimental results demonstrate that the proposed proxy can rapidly and efficiently search for neural architectures that exhibit consistent robustness against various perturbations across multiple benchmark datasets (CIFAR-10, CIFAR-100, ImageNet16-120) and diverse search spaces (NAS-Bench-201, DARTS). This performance significantly surpasses that of existing clean zero-shot NAS and robust NAS methods, all while reducing search costs. Strengths: * The focus on developing architectures that are robust against a *diverse range of perturbations* is novel and has not been studied earlier, presenting a wide range of potential applications. * The problem setting is well-motivated and effectively approached, with the introduction setting the context particularly well. * The experimental setup is solid, employing two benchmarks and three datasets. Weaknesses: * The choice of adversarial attacks -- FGSM and PGD -- seems somewhat outdated from the perspective of adversarial attacks. It would be beneficial to understand how the proposed method would perform with more recent adversarial attack methods such as [LGV](https://arxiv.org/abs/2207.13129), [SPSA](https://arxiv.org/abs/1802.05666) etc. * The same concern applies to the set of experiments where adversarial training is used.
The choice of $L_\infty$ PGD attacks may not be very contemporary, and exploring more recent approaches could enhance the paper's relevance and applicability. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. How does the proposed method perform when confronted with novel types of perturbations that were not considered during the architecture search? Is there a mechanism to make the method more adaptive to new perturbations? (This question is related to Weakness 1) 2. Is there a specific reason for choosing a relatively older search space like DARTS, especially when there are more recent search spaces available? (Please refer to NAS Bench [NAS Bench](https://www.automl.org/nas-overview/nasbench/), [$A^{3}D$: A Platform of Searching for Robust Neural Architectures and Efficient Adversarial Attacks](https://arxiv.org/pdf/2203.03128)) 3. Comparison with robust ensemble-based methods: Ensemble-based methods in NAS are often known to be more robust. Does the proposed proxy support ensemble-based approaches? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Limitation and Broader impact are discussed in the supplementary Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1. Evaluation against recent adversarial attacks is needed (i.e., LGV, SPSA).** * Thanks for your comment, we provide additional experimental results on recent adversarial attacks such as CW, DeepFool, SPSA, LGV, and AutoAttack by evaluating Spearman’s rank correlation on the NAS-Bench-201 search space where each candidate architecture is adversarially trained with PGD-7 on CIFAR-10. * For evaluation, we employ CW and DeepFool attack with 50 steps, SPSA with $\epsilon$=8/255 and a single iteration, LGV with $\epsilon$=4/255, $\alpha$=4/255/10, and 10 steps, and AutoAttack with $\epsilon$=8/255. * As evidenced by the correlation in the ‘Avg.’ column, our proxy consistently **shows the highest overall correlation of 0.399 for robust accuracies** even against recent adversarial attacks, while the best baseline (GradNorm) only achieves 0.352. This suggests that our proxy is capable of identifying neural architectures that are robust against a wide range of adversarial attacks. (Detailed table can be found in Table R1 in PDF) |Proxy Type|CW|DeepFool|SPSA|LGV|AutoAttack|Avg.| |-|-|-|-|-|-|-| |GradNorm|**0.264**|0.421|0.149|0.401|0.405|0.352| |NASWOT|0.240|0.250|0.197|0.265|0.283|0.271| |CRoZe|0.220|**0.454**|**0.240**|**0.449**|**0.458**|**0.399** | ------------------ **W2. The choice of PGD attacks may not be very contemporary.** * The main reason for using adversarial training is that it is the most straightforward way to attain a robust model for a reliable robust evaluation. To address your concern, we additionally provide the experimental results on the NAS-Bench-201 search space (our response to W1, **Table R1**) and the DARTS search space on CIFAR-10, presented in **Table R2** in the PDF file. We assess the robustness of the architectures identified by each method against recent adversarial attacks, including CW, DeepFool, LGV, SPSA, and AutoAttack. 
Our proxy underscores its ability to discover neural architectures demonstrating **the highest average robustness of 55.77% against various attacks with 15 times less search cost** compared to AdvRush. (Detailed table can be found in Table R2 in PDF) ||Type|Search Cost (GPU sec)|Avg. Rob.| |-|-|-|-| |DrNAS|Clean one-shot|46857|55.60| |AdvRush|Robust one-shot|251245|55.57| |GradNorm|Clean zero-shot|9740|51.61| |SynFlow|Clean zero-shot|10138|52.96| |CRoZe|Robust zero-shot|17066|**55.77**| ------------- **Q1. How does CRoZe perform against novel types of perturbations?** * Experimental results against novel attacks can be found in the responses to W1 and W2. * Our theoretical insight derives from the observation that a robust model learns invariant features with respect to clean and perturbed images, regardless of the type of perturbation applied to the input [1,2,3]. On top of this motivation, our proxy mainly focuses on the consistency of features, parameters, and gradients between clean and perturbed inputs, without targeting a specific type of perturbation when assessing the robustness of a given neural architecture. * Because it is not tied to a specific perturbation type, our proxy works well with any kind of perturbed input, such as Gaussian noise or adversarial examples (Table 4). Furthermore, our proxy demonstrates the ability to handle novel types of perturbation not considered during the search (Table 1, 2). [1] Xie et al., Feature Denoising for Improving Adversarial Robustness, CVPR 2019\ [2] Ilyas et al., Adversarial Examples Are Not Bugs, They Are Features, NeurIPS 2019 \ [3] Goodfellow et al., Explaining and Harnessing Adversarial Examples, ICLR 2015 ------------ **Q2. Is there a reason for choosing an older search space like DARTS?** * We chose the DARTS search space because 1) it allows a fair comparison with prior works [1,2], and 2) it is the largest search space for image classification tasks, as shown in the following table.
Besides, please note that we also conducted experiments on NAS-Bench-201 in Tables 1 and 2.

|Benchmark|NAS-Bench-101|NAS-Bench-201|NAS-Bench-301|NAS-Bench-Macro|DARTS|
|-|-|-|-|-|-|
|Size|423k|6k|$10^{18}$|6k|$10^{18}$|

* Furthermore, the search spaces (NAS-Bench) that you have mentioned are, in fact, encapsulated within NAS-Bench-101, NAS-Bench-201, and NAS-Bench-301, differing only in the metrics they report for neural architectures beyond validation accuracy. In NAS-Bench-301, each neural architecture is defined with the configuration of the DARTS search space, which ultimately aligns with the DARTS search space. In addition, the $A^3$D search space is a subset of the DARTS search space with fewer operation choices, so compared to $A^3$D, we employ a wider range of operations, including sep_con_7x7 and dil_con_7x7.

[1] Chen et al., DrNAS: Dirichlet Neural Architecture Search, ICLR 2021\
[2] Mok et al., AdvRush: Searching for Adversarially Robust Neural Architectures, ICCV 2021

------------

**Q3. Comparison with robust ensemble-based methods.**

* If we understand correctly, we assume that you might be referring to [1], which can be used orthogonally with our approach. We have carried out additional experiments with ensemble-based NAS. We estimate the robustness by ensembling zero-cost proxies on the NAS-Bench-201 search space across CIFAR-10, CIFAR-100, and ImageNet16-120. As depicted in the table below, our proxy exhibits an improved correlation. This suggests that the robust evaluation of given neural architectures can be significantly enhanced by ensemble integration with our proxy.
(A detailed table can be found in Table R6 in the PDF.)

| |CIFAR-10||||CIFAR-100||||ImageNet16-120|||
|-|-|-|-|-|-|-|-|-|-|-|-|
| |Clean|FGSM|PGD|CC|Clean|FGSM|PGD|CC|Clean|FGSM|PGD|
|CRoZe|0.823|0.826|0.188|0.719|0.784|0.786|0.343|0.765|0.765|0.596|**0.707**|
|Ensemble|0.803|0.681|0.543|0.771|0.793|0.739|**0.444**|0.798|0.749|0.638|0.296|
|CRoZe+Ensemble|**0.894**|**0.872**|**0.633**|**0.894**|**0.894**|**0.851**|0.415|**0.878**|**0.810**|**0.688**|0.259|

[1] Abdelfattah et al., Zero-Cost Proxies for Lightweight NAS, ICLR 2021

---

Rebuttal Comment 1.1: Title: Gentle Reminder
Comment: Dear Reviewer,

This is a gentle reminder for the discussion period. We have addressed all of your initial concerns and provided additional experimental results in our previous responses. We sincerely hope to discuss our work and look forward to your constructive comments. For your convenience, we provide a **short summary** of our previous responses to your initial concerns. Detailed responses can be found in the previous responses.

Best,\
Author

---

> ### Summary
>- **Evaluation of adversarially trained models against recent adversarial attacks on the NAS-Bench-201 search space and the DARTS search space on CIFAR-10.**
> - (Shown through an additional experiment in Table R1) Our method achieves **the highest overall correlation of 0.399 for average robust accuracies (CW, DeepFool, SPSA, LGV, and AutoAttack)**, while the best baseline shows 0.352.
> - (Shown through an additional experiment in Table R2, Figure R1) Our proxy demonstrates **the highest average robustness of 55.77%** against various attacks (CW, LGV, SPSA, and AutoAttack) with **15 times less search cost** compared to the baseline.
>- **Explanation of how CRoZe can handle novel types of perturbations.**
> - Our proxy identifies architectures that maintain consistent features, gradients, and parameters across diverse perturbations, making them suitable for any type of perturbation. (Please visit our initial response for a detailed explanation.)
>- **The reason for using the DARTS search space.**
> - Because 1) existing baselines use the DARTS search space, and 2) it is the largest search space for image classification tasks.
>- **Compatibility with ensemble-based NAS.**
> - (Shown through an additional experiment in Table R6) We demonstrate that our proxy can improve the correlation for clean and robust accuracies on diverse tasks when it is incorporated into ensemble-based NAS on the NAS-Bench-201 search space.

---

Reply to Comment 1.1.1: Title: Gentle Reminder
Comment: Dear Reviewer,

**With just 1 day left in our discussion period**, we politely ask you to review our recent responses and the attached file showcasing the additional experimental results you requested. For your convenience, **a summary has also been provided in our previous response.**

We truly appreciate your commitment to the review process and await your valuable feedback.

Warm regards,\
Author

---

Rebuttal Comment 1.2: Comment: I have carefully reviewed the initial submission, the authors' response, and the feedback from other reviewers. I appreciate the effort that has been invested in addressing the concerns raised, and I would like to thank the authors for the exhaustive set of experimental results. The responses provide answers to my questions, and in light of this, I have updated my rating accordingly.

---

Reply to Comment 1.2.1: Title: Thanks for your reply
Comment: Dear Reviewer,

We sincerely appreciate the time and effort you've dedicated to reviewing our work.\
Your positive comments and detailed concerns have greatly enhanced the quality of our work.

Best regards,\
Author
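As an aside for readers less familiar with the metric used throughout the tables above: Spearman's rank correlation between zero-cost proxy scores and (robust) accuracies can be computed in a few lines. The sketch below is purely illustrative — the architecture scores are made up — and it assumes no tied values (a production implementation such as `scipy.stats.spearmanr` also handles ties):

```python
def rank(values):
    # Assign ranks 1..n by value (assumes no ties, as noted above).
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0] * len(values)
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    return ranks

def spearman(x, y):
    # Tie-free shortcut formula: rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1)),
    # where d is the difference between the ranks of paired observations.
    n = len(x)
    rx, ry = rank(x), rank(y)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))

# Hypothetical proxy scores vs. robust accuracies for five architectures.
proxy_scores = [0.31, 0.55, 0.12, 0.78, 0.44]
robust_acc   = [48.2, 52.1, 45.0, 55.7, 50.3]
print(spearman(proxy_scores, robust_acc))  # perfectly monotone -> 1.0
```

A correlation of 0.399, as reported for CRoZe above, thus means the proxy's ranking of architectures agrees only partially with the ranking by robust accuracy, but more so than the baselines'.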
Rebuttal 1: Rebuttal: Dear Reviewers,

We deeply appreciate the time and effort you have invested in reviewing our paper. During the initial response period, we did our best to address all the concerns you raised in our responses. Moreover, **we have thoughtfully included the additional experimental results that you requested in the attached PDF file**. We kindly request your thorough consideration of our responses and the additional experiments in the PDF. For your convenience, we have provided tables summarizing the contents, corresponding to each reviewer.

We hope that our revisions have fully resolved your concerns, and we kindly request that you consider reflecting these changes in your updated review scores.

Best,\
Authors

---

### **Summary**
> **Reviewer m6P2**
> - Evaluation of adversarially trained models against recent adversarial attacks (Table R1, Table R2, Figure R1)
> - Compatibility with ensemble-based NAS (Table R6)

---

> **Reviewer DQhw**
> - Evaluation with different weight initialization methods (Table R3)
> - Influence of different types of adversarial training (Response)
> - End-to-end NAS performance against diverse adversarial attacks (Table R2)

---

> **Reviewer 41nW**
> - Evaluation against diverse adversarial attacks (Table R1)
> - Comparison with human-made architectures (Table R4)
> - Evaluation against AutoAttack (Table R2)
> - Verification of the generalizability to diverse perturbations (Table R1, R2, 1, 2, 3)

---

> **Reviewer KEHv**
> - Comparison with recent train-free NAS works (Table R5)
> - Experimental results on final clean and robust accuracy (Table R2, 3)

---

### **Contents in PDF file**
* **[Table R1]**: Evaluation of adversarially trained models against diverse adversarial attacks on the NAS-Bench-201 search space
  * Reviewer m6P2 (Weakness 1, 2)
  * Reviewer 41nW (Weakness 1, 2)
* **[Table R2/Figure R1]**: End-to-end performance of adversarially trained models against diverse adversarial attacks on the DARTS search space
  * Reviewer m6P2 (Weakness 2)
  * Reviewer DQhw (Weakness 2)
  * Reviewer 41nW (Weakness 3)
* **[Table R3]**: Comparisons with diverse weight initialization methods for our proxy
  * Reviewer DQhw (Weakness 1)
* **[Table R4]**: Comparisons with human-made architectures
  * Reviewer 41nW (Weakness 3)
* **[Table R5]**: Comparisons with more recent train-free NAS baselines
  * Reviewer m6P2 (Question 3)
* **[Table R6]**: Compatibility with ensemble-based NAS
  * Reviewer KEHv (Weakness 2)

Pdf: /pdf/dc73c4431077d96078a63b5fef34dbb27ff3874f.pdf
NeurIPS_2023_submissions_huggingface
2023
A Definition of Continual Reinforcement Learning
Accept (poster)
Summary: The paper looks at developing a foundation for continual reinforcement learning (CRL). The authors develop definitions and insights, aiming to formalize the intuitive concepts in continual learning fields. They also provide two examples of CRL to illustrate the difference between traditional RL and CRL, which is viewing learning as endless adaptation instead of finding a solution.

Strengths:
1. The concept of CRL as an agent designer aiming to identify an optimal agent among available agents is innovative.
2. The use of rigorous mathematical tools and definitions in this paper could provide valuable insight into continual learning and potentially inspire future CRL algorithm development.

Weaknesses:
1. Though it's not the focus and goal of this paper, the idea of an agent basis contains redundancies and may hinder its transformation into a memory- and computation-efficient CRL algorithm.
2. The training process of the agent basis is unclear in Figure 2(b). I guess two examples are presented to show the effectiveness of CRL, but it's not surprising that CRL can beat RL without adaptation in these changing envs. More online learning benchmarks would help to achieve a fair comparison.
3. The paper is not well structured, and only a few related works are mentioned. Continual learning is discussed in many fields, with different names (e.g., online learning, few-shot adaptation, adaptive control); it would be valuable to provide a broader literature review.

Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: The high-level idea of CRL training is understandable, but I am unsure about the training details in the Section 4.2 CRL examples. Including a pseudocode description could clarify the training process and help understand the implementation details.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: Overall, the paper requires a lot of additional work to become publishable. Adding figures and pseudocode would enhance readability and understanding. Furthermore, conducting more experiments to demonstrate the effectiveness of the proposed CRL idea would strengthen the paper's contributions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: First, we would like to thank the reviewer for reviewing our paper. We address the reviewer’s primary questions and concerns below, and will plan to update the paper in line with the reviewer’s suggestions. > 1, Though it's not the focus and goal of this paper, the idea of agent basis contains redundancies and may hinder the transformation of it into a memory and computation efficient CRL algorithm. Can you say more here? We do not understand the comment. It is unclear in what sense a basis contains redundancies. We show in Theorem 5.1.4+5.1.5 that it is a necessary property of every CRL problem that the design space of agents contains redundancies in a particular sense (with proofs provided in Appendix C). We suggest that, on a certain view, this might be unsurprising as the space of agents generated by a fixed agent basis will likely contain agents that are similar in their search through the agent basis. Critically, this redundancy need not apply to an agent basis. And, it is unclear to us what is meant by “...may hinder the transformation of it into a memory and computation efficient CRL algorithm”. If the reviewer could clarify we are happy to discuss further. > 2, The training process of agent basis is unclear in Figure 2 (b). I guess two examples are presented to show the effectiveness of CRL, but it's not surprising that CRL can beat RL without adaptation in these changing envs. More online learning benchmarks would help to achieve a fair comparison. The examples in Section 4.2 are intended to illustrate that our definition of continual RL can directly accommodate standard instances of continual learning—the first case is an instance of the typical multi-task or lifelong RL setting often used in prior work (see, for instance, Wilson et al. 2007 or Brunskill and Li 2014), and the second case is exactly continual supervised learning as described in the survey by Mai et al. 2022. 
In the first example, the two agents deployed are both tabular Q-learning, where the only difference is the choice of the annealing schedule for the step-size parameter. In the “$\alpha=C$” case, the step-size is a fixed constant, whereas in the “$\alpha=$anneal” case, the step-size is annealed to zero in the limit.

> 3, …only a few related works are mentioned. Continual Learning is discussed in many fields, with different names (e.g., online learning, few-shot adaptation, adaptive control), it would be valuable to provide a broader literature review.

We are happy to include a broader and deeper discussion of related work on continual learning. We will plan to focus this discussion on the explicit relationship between our definition and past approaches to continual RL from the work by Ring and the recent continual RL survey by Khetarpal et al. As mentioned in our reply to all reviewers, the work by Ring emphasizes the generality of the environment model and the reward function, while the survey by Khetarpal et al. focuses explicitly on non-stationary MDPs, similar to typical work on multi-task RL. We will include discussion showing the sense in which we accommodate and extend both of these views.

> The high-level idea of CRL training is understandable, but I am unsure about the training details in Section 4.2 CRL examples. Including a pseudocode description could clarify the training process and help understand the implementation details.

Can the reviewer clarify which details would be helpful to hear more about? Both algorithms are simply tabular Q-learning with different step-size annealing schedules. We are more than happy to include pseudocode (space permitting, or in the appendix) if that would help.

---

Rebuttal Comment 1.1: Comment: Thanks to the authors for their thorough response; it really helps my understanding. My concern was that the agent needs to switch between a set of base agents, and this set can be large and needs more training power.
Like the Section 4.2 CRL example, we have four policies to train and then learn a particular agent by searching over the base agents. Please let me know if I get it right.

---

Reply to Comment 1.1.1: Title: Response to Comment
Comment: We thank the reviewer for following up, and we are glad to hear our response has helped.

> My concern was that the agent needs to switch between a set of base agents and this set can be large and needs more training power. Like the Section 4.2 CRL example, we have four policies to train and then learn a particular agent by searching over the base agents. Please let me know if I get it right.

To clarify, when we talk about agents switching between elements of an agent basis, we do so in a general way: we mean that _every agent_, including all of those reinforcement learning algorithms we have designed and implemented to date, can be understood in this way. For instance, consider DQN (Mnih et al. 2015). The Q-network maintained by this algorithm at any point in time can be thought of as the currently active element of the basis. Then, through learning from experience, the algorithm updates the parameters of this Q-network, thereby updating the currently active element of the basis. In this way, the set of all assignments of parameters to the network is the agent basis, and any element of this set is one of the possible elements of the basis (a specific Q-network, and thus, a policy).

In the example in Section 4.2, there are not four policies to train: both agents (standard Q-learning and Q-learning with a fixed step-size parameter) search through the space of Q functions, roughly. As a result, we do not quite understand the point that an agent needs more training power, as the above perspective applies to all agents. We do agree that continual RL problems are likely to be _difficult_ learning problems in general: learning to adapt endlessly is likely _harder_ than learning to find a single solution to a problem in many cases.
We believe it is important for the community to embrace and confront this difficulty, which is why we think it is useful to carefully define the problem. Does that help? We are happy to help clarify or discuss further.
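To make the Section 4.2 contrast discussed in this thread concrete, here is a minimal sketch — our own illustration, not the authors' code, with a hypothetical toy environment — of tabular Q-learning where the two "agents" differ only in their step-size schedule:

```python
import random

def q_learning(env_step, n_states, n_actions, steps, alpha_schedule,
               gamma=0.9, eps=0.1, seed=0):
    # Tabular Q-learning; the two agents differ only in alpha_schedule.
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]
    s = 0
    for t in range(steps):
        # epsilon-greedy action selection
        if rng.random() < eps:
            a = rng.randrange(n_actions)
        else:
            a = max(range(n_actions), key=lambda i: Q[s][i])
        s2, r = env_step(s, a, rng)
        alpha = alpha_schedule(t)
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2
    return Q

# The two step-size schedules contrasted in the rebuttal:
alpha_const = lambda t: 0.1             # "alpha = C": the agent never stops adapting
alpha_anneal = lambda t: 1.0 / (1 + t)  # "alpha = anneal": step-size -> 0 in the limit

# Hypothetical toy environment: two states, action 1 always yields reward 1.
def toy_env(s, a, rng):
    return 1 - s, float(a == 1)

Q_const = q_learning(toy_env, 2, 2, 5000, alpha_const)
Q_anneal = q_learning(toy_env, 2, 2, 5000, alpha_anneal)
```

Under the paper's framing, the annealed agent eventually (in the limit) stops its search through the space of Q functions, while the constant-step-size agent keeps updating its active basis element forever.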
Summary: In this paper, the authors develop a simple mathematical definition of the continual RL problem. These definitions, insights, and results formalize many intuitive concepts at the heart of continual learning, and may open new research pathways surrounding continual learning agents.

Strengths: The mathematical definition of continual reinforcement learning gives us some inspiration when thinking about continual learning, multi-task RL, continual supervised learning, and so on.

Weaknesses:
1. The abstract is so simple that readers cannot get enough information and key ideas from it.
2. This paper is not easy to follow, given the overall mathematical definitions and analysis. I hope the authors can give us more cases or experiments using the definitions proposed in this paper to highlight the benefits of following such definitions.

Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors:
1. What information can we get from Figure 2(a), the Switching MDP visual? Do the authors just want to emphasize “Each underlying MDP shares the same state space and action space, but varies in transition and reward functions”?
2. How can we prove or evaluate the definitions, the proposed operators, and so on?
3. Can the authors add some ablation studies to this paper?

Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: Please modify the paper as per the weaknesses and questions above, and add a limitation analysis to the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: First, we would like to thank the reviewer for reviewing our paper. We address the reviewer’s primary questions and concerns below, and will plan to update the paper in line with the reviewer’s suggestions.

> 1. The abstract is so simple that readers cannot get enough information and key ideas from it.

We had originally thought a simple abstract would be a good choice, but in hindsight, we agree with the reviewer and will modify it accordingly. We will plan to change our abstract to the one mentioned in the rebuttal reply to all reviewers, and are open to suggestions for improvement.

> 1. What information can we get from Figure 2(a), the Switching MDP visual? Do the authors just want to emphasize “Each underlying MDP shares the same state space and action space, but varies in transition and reward functions”?

Yes, that is correct; we would like to give the reader a clear sense of the learning problem by emphasizing that each individual MDP shares a state-action space, but differs in its other components.

> 2. How can we prove or evaluate the definitions, the proposed operators, and so on?

We believe that our definitions and operators should (1) capture intuitions and easily accommodate known cases of continual learning, (2) hold up under scrutiny, and (3) provide a language for carefully discussing important phenomena related to continual learning. We believe that our definition and the operators accomplish these three feats. For instance, our CRL definition formalizes the intuition that “the best agents keep learning forever”, and the two examples we provide in Section 4.2 showcase that our definition can accommodate known special cases. Moreover, the necessary properties we establish of the CRL definition and the operators (Theorem 5.1, Theorem 5.2, Theorem 5.3) provide an initial characterization that goes beyond the contents of the definitions themselves.
For example, in defining the generates operator, it may not be obvious that deciding whether a given basis generates an agent set in an environment is undecidable; Theorem 5.2 proves this is in fact the case. In this sense, we believe our definition and operators satisfy the three above criteria (capturing intuitions and accommodating known cases, holding up under scrutiny, and providing a language for discussing phenomena related to continual learning).

> 3. Can the authors add some ablation studies to this paper?

It is unclear to us what exactly an ablation would be in this case, and to what end we need one. Can you please elaborate?
Summary: The paper proposes a mathematical formulation for the problem of continual reinforcement learning in an infinite horizon setting.

Strengths:
1. The authors propose a new mathematical formalism for continual RL.
2. The paper can be of interest to mathematically inclined readers and could potentially lead to theoretical results in continual reinforcement learning.

Weaknesses:
1. Abstract: there already are foundations for CRL. You cited a survey paper [16] and multiple other existing works. As unconventional as it is, I wouldn’t be against a short abstract, but I do not believe the current one will do.
2. Missed important opportunities to connect your formalism with existing (continual) RL. E.g., eq 2.1 states an infinite horizon, which is required for example in Theorem 3.1, but the very standard “infinite horizon” vocabulary never appears. I think the paper is not connected enough to the existing (continual) RL literature.
3. Generally, I’m not convinced by the motivations. The paper says it is important, but I did not really see a convincing motivation for abstracting the problem of continual RL. The theorems provide results that can be conveyed in significantly easier ways with existing formulations: e.g., non-commutativity in thm 5.2 signifies that in continual RL, some policy transitions cannot be “unlearnt” in some settings. The formalism incurs a very significant overhead over such high-level language – I believe that the authors can improve the motivations to justify why their formalism is worth it.

**Typos and suggestions:**
1. L10: we instead of We
2. L41: I would use $\mathcal{Z}$ instead of $\mathcal{X}$ so it flows better in the following sentence.
3. L84: phrasing “instead opt for simply the bounded…”
4. L93: did you mean the sum of rewards to start at index 0? or t+1, t+2, etc.
5. L95: I know it got absorbed into v, but I would still prepend this line with “Given <initial conditions> $h_0$, any tuple…” since your work aims to formalise rigorously the problem.
6. “We use as if in the sense of the positive economists” this is terminology that should be defined in the paper.
7. L334: “generates is commutative” -> are
8. L338: I recommend not defining objects used in a main body theorem in the appendix.
9. I believe that the insights, as stated in the conclusion, are trivial; perhaps “insight” isn’t the word that should be used (using the Cambridge dictionary definition of “insight”: “a clear, deep, and sometimes sudden understanding of a complicated problem or situation” – I do not believe the idea, for example, that agents in CRL either can or cannot find a set of policies/behaviours is insightful).

Technical Quality: 2 fair
Clarity: 3 good
Questions for Authors: No.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 3 good
Contribution: 2 fair
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly.
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: First, we would like to thank the reviewer for their time and energy in reading and reviewing our paper. We address the reviewer’s primary questions and concerns below, and will plan to update the paper in line with the reviewer’s suggestions. > Abstract: there already are foundations for CRL. You cited a survey paper [16] and multiple other existing works. As unconventional as it is, I wouldn’t be against a short abstract, but I do not believe the current one will do. This is a great point, and we are happy to extend our abstract in light of the feedback. Our proposed abstract draft is provided in the reply to all reviewers ("[Overall Response]") > Missed important opportunities to connect your formalism with existing (continual) RL. E.g. eq 2.1 states an infinite horizon, which is required for example in Theorem 3.1, but the very standard “infinite horizon” vocabulary never appears. I think the paper is not connected enough to existing (continual) RL literature. We agree, and recognize the importance of discussing the relationship between our formalism and prior approaches to continual RL. We will include a discussion that elucidates these relationships further, with a focus on the thesis by Ring and the recent survey by Khetarpal et al. Further details of this relationship are mentioned in the "[Overall Response]" to all reviewers. > Generally, I’m not convinced by motivations. The paper says it is important, but I did not really see a convincing motivation for abstracting the problem of continual RL... I believe that the authors can improve the motivations to justify why their formalism is worth it. We believe that precise definitions are essential for clarity in science. For example, prior to the discovery of pseudo-randomness, much work in theoretical computer science relied on ad-hoc heuristics for producing random sequences to the detriment of the effectiveness of these approaches. 
Then, a series of papers carefully defined what it would mean for a sequence to be pseudo-random (for instance, in “The definition of random sequences” by P. Martin-Lof, and “Theory and applications of trapdoor functions” by Yao). On Yao’s view, a pseudo-random sequence is one that a bounded adversary cannot distinguish from truly random. Achieving this clarity allowed for pseudo-random number generators to be developed, and for the community to direct its research at the right concepts, and toward the right objectives. In our view, continual reinforcement learning is at a similar stage in its scientific development to pseudo-randomness in its infancy: we lack precise definitions for what, exactly, it is we are conceptualizing. Some people think of non-stationarity, others think of computational constraints, while others think of multi-task learning and transfer. Our goal in developing a single abstract definition is to encompass all of these views as special cases, and the point of formalizing it is to remove any ambiguity in the use of terms like “continual learning”, or “continual learning agent”. We can include more of this motivation in the paper, and are happy to discuss further. > I believe that the insights, as stated in the conclusion, are trivial; perhaps “insight” isn’t the word that should be used… I do not believe the idea for example that agents in CRL either can or cannot find a set of policies/behaviours is insightful. Thanks for the feedback here, and we are happy to consider alternative words to replace “insight”. However, we would like to clarify one point, and justify another. First, to clarify: the second insight does not state an agent in CRL can or cannot find a set of policies. This does not reflect the semantics of Remark 3.2. The remark says that _every_ agent can be classified into exactly one of two families: (1) the agent eventually stops its search over its corresponding basis, or (2) searches forever. 
This is regardless of the learning setting, environment, or reward function. The generality and precision of the statement are why we believe it is insightful.

Second, to justify further why we chose the term “insight”: we provide a formalism of an intuitive fact that holds true of every possible agent in a way that, in hindsight, the fact seems clear and even obvious. Even if the intuition is obvious (in hindsight), knowing how precisely to state this fact rigorously in a way that holds true of every possible agent is non-trivial (in our opinion). In this sense, we take the precise formalisation of the intuition to be insightful. Moreover, the first insight (conveyed through Theorem 3.2), we take to be the stronger of the two. We are happy to discuss further, and if the reviewer is unswayed by the above, we are happy to change the term “insight” to something weaker.

> L95: I know it got absorbed into v, but I would still prepend this line with “Given <initial conditions> $h_0$, any tuple…” since your work aims to formalise rigorously the problem.

Good point, fixed!

> “We use as if in the sense of the positive economists” this is terminology that should be defined in the paper.

We are happy to include an expanded description of this terminology in the paper.

> L338: I recommend not defining objects used in a main body theorem in the appendix.

This is a fair point. We will move the definition of “always reaches” to the main body of the paper.

> Other Typos and Writing suggestions:

Thanks! We have fixed the remaining suggested typos and writing suggestions.

---

Rebuttal Comment 1.1: Comment: I would like to thank the authors for the extensive replies to every reviewer and for acknowledging the importance of rethinking the abstract and literature/contextualising.

> We believe that precise definitions are essential for clarity in science.
For example, prior to the discovery of pseudo-randomness, much work in theoretical computer science relied on ad-hoc heuristics for producing random sequences to the detriment of the effectiveness of these approaches. Then, a series of papers carefully defined what it would mean for a sequence to be pseudo-random (for instance, in “The definition of random sequences” by P. Martin-Lof, and “Theory and applications of trapdoor functions” by Yao). On Yao’s view, a pseudo-random sequence is one that a bounded adversary cannot distinguish from truly random. Achieving this clarity allowed for pseudo-random number generators to be developed, and for the community to direct its research at the right concepts, and toward the right objectives. In our view, continual reinforcement learning is at a similar stage in its scientific development to pseudo-randomness in its infancy: we lack precise definitions for what, exactly, it is we are conceptualizing. I agree, as someone who often has to be on the other end of this argument. However, in my opinion, the less accessible a formalism is (and I do believe the one presented here is on the more abstract end of the spectrum), the more burden there is to motivate it. For example, Balduzzi et al.'s *The Mechanics of n-Player Differentiable Games* was a highly impactful paper not only due to its simplicity, but also for proposing in the paper new analyses using their formalism. I understand that the problem here may warrant a more complex formalism, but I still believe proposing something using that formalism is important. > Some people think of non-stationarity, others think of computational constraints, while others think of multi-task learning and transfer. Our goal in developing a single abstract definition is to encompass all of these views as special cases, and the point of formalizing it is to remove any ambiguity in the use of terms like “continual learning”, or “continual learning agent”. 
That's a bit unrelated to your submission, but 1) I believe *non-stationarity* already captures multi-task learning and transfer, and 2) are those terms really ambiguous? I have not experienced much disagreement over the definitions of CL that has somehow impeded discussions/led to misunderstandings. > First, to clarify: the second insight does not state an agent in CRL can or cannot find a set of policies. This does not reflect the semantics of Remark 3.2. The remark says that every agent can be classified into exactly one of two families: (1) the agent eventually stops its search over its corresponding basis, or (2) the agent searches forever. I might be missing something re: a third possible outcome. The way I'm reading this is "either A or not A" where A = "the agent eventually stops its search". Again, I assume I'm missing something, so could the authors clarify this point further? Also, I believe what I stated is the loose equivalent of that, under equating agent with policy, and assuming the search stopping = "finding", but I may very well be wrong. > Second, to justify further why we chose the term “insight”: we provide a formalism of an intuitive fact that holds true of every possible agent in a way that, in hindsight, the fact seems clear and even obvious. Even if the intuition is obvious (in hindsight)… While I understand the point of having it stem from the formalism (and you might argue the following is semantics), it's more foresight than hindsight. The formalism is developed (just like in natural sciences) with the goal of matching current knowledge of CL, but capturing it with a more rigorous formulation. After all, that is the motivation of the paper. --- Reply to Comment 1.1.1: Title: First Response Comment: We thank the reviewer for their thoughtful reply. We have two immediate reactions: > I might be missing something re: a third possible outcome. The way I'm reading this is "either A or not A" where A = "the agent eventually stops its search".
We now believe we agree here: the remark is pointing out a claim of the form "either A or not A", exactly as you suggest, and A = "the agent eventually stops its search", as you suggest. Our original comment was indicating that we believed there was misunderstanding on what A was ("finding a set of policies" vs. "agent stopping its search"). We believe that we are all now on the same page. And, regarding the choice of the term "insight": > While I understand the point of having it stem from the formalism (and you might argue the following is semantics), it's more foresight than hindsight. The formalism is developed (just like in natural sciences) with the goal of matching current knowledge of CL, but capturing it with a more rigorous formulation. After all, that is the motivation of the paper. Thanks, that resonates. Our main points here are that: (1) formalizing intuitions about _all_ agents into rigorous statements that reflect common knowledge and intuitions is non-trivial, and (2) the first "insight" (Theorem 3.1) is really intended to be the more "insightful" of the two. We want to emphasize that we are more than happy to change the paper to move away from the term "insight", as we appreciate and understand where the reviewer is coming from with this point.
Summary: - This paper lays out a foundation for continual reinforcement learning (CRL) from the ground up—establishing definitions for the purpose of building towards a technical definition of CRL itself, proving various properties of CRL, and through employing simple CRL examples, demonstrating some of these properties. Strengths: - The writing is exceptionally clear and precise. - The framework is well thought-out; the definitions are clear and useful, and the framework as a whole is both technical and intuitive. Importantly, the provided definition of CRL seems to get at the root of what researchers have always vaguely meant by CRL. - There is a rich foundation set for future work (e.g. exploring “connections between our formalism of continual learning and some of the phenomena at the heart of recent empirical continual learning studies, such as plasticity loss, in-context learning, and catastrophic forgetting”; line 364). Weaknesses: - I understand the appeal of a short abstract (i.e. it is to the point, and (albeit somewhat pompously) implies some special finality or importance to the contributions in the work), but the resolution of the abstract, for good reason, usually sits between the resolution of the title and the paper itself. The abstract in this case does not provide much more information than the title, so any reader that wants a quick overview of the paper must read the paper itself, defeating the purpose of the abstract. I suggest the authors reconsider employing such a short abstract. - I feel that the paper could have developed more connections to previous literature, performed more experiments building off of this framework, etc., i.e. more validation for why we should care about this framework. Better (nonstationary) environments to evaluate agents on? Agent structures? Assessment of how well current methods fit into/exploit elements of this framework?
The framework is nice, but it would be useful to see how it can be situated within current literature, either by retroactively analyzing current agents/environments w.r.t. this framework, or designing new agents/environments from the principles this framework provides. - Maybe more real-world examples of changing environments? Or where we want the behavior to change in the same environment (but then we just have an oscillating agent…). Something about the environment changing (or being better understood over time) is fundamental to continual RL. - It took me a number of readings to understand agent bases; i.e. the relationship between $\Lambda$ and $\Lambda_B$. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - Do we really want agents that will adapt their policy forever? Or are the environments we care about simply non-stationary, i.e. because the world changes, and the agent is required to learn more about the environment in order to continue to act optimally? Maybe the policy (in some high-level sense) should remain the same, and it is the world model that should be adapted? E.g. suppose the expressed behavior of the agent policy can be boiled down to “helping humans”, and this policy is fixed, given the agent's current understanding of the world as context. Perhaps only this context needs to be updated, i.e. as the world changes, and/or the agent learns more about the existing world. This seems to fit within the proposed framework, if we consider the inferred world model as part of the agent. And while it does seem likely that the discovery of new contexts would require updating the policy, it still appears to me to be beneficial to disentangle these aspects explicitly. What are the authors' thoughts on this? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: - The authors have discussed the limitations thoroughly, and I have listed them above as a strength as there are many directions ripe for future work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: First, we would like to thank the reviewer for their time and energy in reading and reviewing our paper. We address the reviewer’s primary questions and concerns below, and will plan to update the paper in line with the reviewer’s suggestions. > The abstract in this case does not provide much more information than the title, so any reader that wants a quick overview of the paper must read the paper itself, defeating the purpose of the abstract. I suggest the authors reconsider employing such a short abstract. We had originally thought a simple abstract would be a good choice, but we hear the reviewers' suggestion, and will modify it accordingly. We will plan to change our abstract to the one listed in the rebuttal comment provided to all reviewers (see "[Overall Response]" above). We are happy to make any further changes based on your feedback. > I feel that the paper could have developed more connections to previous literature, This is a valid point, thanks! We have two reactions: First, in light of this suggestion, we will include additional text discussing explicit connections with previous approaches to continual reinforcement learning, with an emphasis on the work by Ring and the recent continual RL survey by Khetarpal et al. Succinctly, the work by Ring emphasizes the generality of the environment model and the reward function (which we also adopt), while the survey by Khetarpal et al. focuses explicitly on non-stationary MDPs, similar to typical work on multi-task RL (as in work by Brunskill and Li, for instance). Our definition directly accommodates and extends both of these views, and we are happy to include discussion on this topic in the paper. Second, one of the primary goals of this paper is to promote a change of mindset—popular thinking in RL tends to view the learning problem as the search for one policy (typically, the optimal policy of an MDP). However, in some cases it is better to think of learning as endless adaptation.
We suspect this change in perspective will be increasingly important as agents are deployed in the real world, which is notoriously messy and changes regularly. While this shift in mindset is not new, we do believe we offer a concrete way to conceptualize this change in perspective that can help promote new RL research aligned with the problems facing the agents we are building today. > …performed more experiments building off of this framework, etc., i.e. more validation for why we should care about this framework. Better (nonstationary) environments to evaluate agents on? Agent structures? Regarding additional validation, evaluation, and environments: we agree with the spirit of this point, and intend to carry out further work motivated by the definition. However, we do believe that the essence of this paper is about carefully defining the problem, and that further evaluation of the kind suggested is best deferred to follow-up work to give it the care and space it deserves. > Q: Do we really want agents that will adapt their policy forever? Or are the environments we care about simply non-stationary, i.e. because the world changes, and the agent is required to learn more about the environment in order to continue to act optimally? This is another great point, and one we explored in depth as part of this work. We believe it is an open question as to whether the two concepts (policy adaptation vs. learning more about the environment) are different, deeply connected, or actually the same concept viewed through different lenses. Roughly, we can see that an agent that must learn more about its environment to act optimally but does not ever adapt its policy is a peculiar case, and one that we suggest should not be viewed as continual learning (that is, when the agent can be optimal while _never_ updating its policy). We suggest there is a deep connection between the two views, and are actively exploring this connection in follow-up work.
We are happy to comment on their connections in the discussion section of the paper, and to discuss this further. --- Rebuttal Comment 1.1: Comment: I thank the authors for their thorough response. I maintain my positive opinion of this paper, especially in light of the authors' (tentative) updates to the draft. I believe this work is ambitious and useful to the community; and that the potential for this work to spur more technically-grounded future research in this area far outweighs the weaknesses pointed out by myself and other reviewers---weaknesses I also believe have been adequately addressed by the authors.
Rebuttal 1: Rebuttal: **[Overall Response]** First we would like to thank all of the reviewers for their time and energy in reading and commenting on our paper. We recognize this takes considerable effort, and we appreciate it. **Summary:** Overall, our impression of the reviews is that there is a lot of enthusiasm around the ambition, importance, novelty, and high writing quality of the work, as well as its potential to open new research pathways and perspectives. Many of the reviewers raise excellent questions and suggestions, and we believe they can each be addressed, and that doing so will strengthen the paper. We here briefly summarize the main points that were raised across several reviews, and provide a more detailed response to each individual reviewer below. **Point 1: The abstract is too short.** We had originally thought a simple abstract would be a good choice, but we hear the reviewers’ suggestion, and will modify our abstract accordingly. We will plan to change our abstract to the following: _[Abstract draft]_ > In the standard view of the reinforcement learning problem, an agent interacts with an environment with the goal of efficiently identifying an optimal behavior. However, this perspective is based on a restricted view of _learning as finding a solution_, rather than treating learning as _endless adaptation_. Instead, _continual_ reinforcement learning refers to the setting in which the best agents keep learning forever. Despite the importance of this setting, the community lacks a simple, canonical definition of the problem that makes its primary commitments and concepts both precise and clear. To this end, this paper is dedicated to carefully defining the continual reinforcement learning problem. We formalize the notion of agents that “keep learning forever” through a pair of operators on agents, the “generates” and “reaches” operators, which provide a new mathematical language for analyzing and cataloging agents.
Using this new language, we define a continual learning agent as one that can be understood as carrying out an implicit search process indefinitely, and continual reinforcement learning as any setting in which the best agents are all continual learning agents. We provide two motivating examples of the setting, illustrating that traditional views of continual learning such as multi-task reinforcement learning and continual supervised learning are special cases of our definition. Furthermore, we establish necessary properties of both the continual reinforcement learning problem and the new operators. Collectively, these definitions, insights, and results formalize many intuitive concepts at the heart of continual learning, and open new research pathways surrounding continual learning agents. We are happy to make changes to the above draft, and we hope this resolves the reviewers’ concern. **Point 2: We could make additional contact with existing work on continual RL** This is a very good suggestion. We will plan to add an expanded discussion in the paper commenting in depth on the relationship between our definition of CRL and prior work on CRL, with an emphasis on the line of work by Ring and the recent CRL survey by Khetarpal et al. Succinctly, the work by Ring emphasizes the generality of the environment model and the reward function, while the survey by Khetarpal et al. focuses explicitly on non-stationarity, similar to work on multi-task RL by Wilson et al. (2007) and Brunskill and Li (2014). Our definition directly accommodates and extends both of these views, and we are happy to include a detailed discussion about this in the paper. **Point 3: “Similarly, the authors mention several avenues for future research such as the examination of catastrophic forgetting - it might be interesting to include at least a characterisation of these problems in the new framework to inspire future work.”** This is another great suggestion.
While these lines of research are still in their early stages (we are actively exploring them now), we do believe there is a new opportunity to model and analyze these important empirical phenomena using the language from our work, and we will plan to expand on this discussion in the paper. As one example of a connection to practical considerations, let’s consider plasticity loss. Plasticity loss can be well thought of using the conceptual tools we develop: _plasticity_ is the fraction of an agent’s corresponding agent basis that remains reachable to the agent over time, and plasticity _loss_ is any reduction of this number. We foresee opportunities to both diagnose agents from a new perspective, and to design algorithms based around this measure; for example, by developing learning rules (Defn. 3.2) that provably maintain plasticity (or analysing which existing learning rules maintain plasticity, and under what conditions). Regarding pathways toward defining new algorithms, we have two responses. First, as we argue after giving the definition of CRL (Defn. 4.1), our goal in defining CRL precisely is to encourage a departure in how we _think_ about designing agents: Given a basis, rather than try to build agents that can _solve_ problems by identifying a fixed high-quality element of the basis, we should instead focus on designing agents that continue to update their basis element indefinitely in light of experience. We believe this change in _perspective_ alone is needed, and can shape our basic approach to designing learning algorithms. Second, as a slightly more concrete proposal, we might explicitly characterize learning rules that are guaranteed to produce a continual learning agent of the kind defined by Definition 4.1—what are the necessary and sufficient conditions of such a learning rule? Answering this question can give rise to algorithmic primitives that are guaranteed to produce continual learning agents (which, as Dohare et al. 
2022 showed recently, is not true of many standard learning rules).
NeurIPS_2023_submissions_huggingface
2023
Summary: The authors propose a new definition of continual reinforcement learning where an optimal continual learning agent will not converge to a fixed policy. This is formalized through the generates operator, which defines the "searching" between different policies; and the reaches operator, which defines whether the agent will stop searching. Strengths: 1. This paper provides a unique definition of the continual learning problem, which is formalized by their mathematical framework. 2. Based on their framework, the authors were able to prove a number of interesting theorems. Weaknesses: 1. It is not immediately obvious how this definition can inspire new learning algorithms. 2. The underlying concept that an optimal learning agent should not converge is not surprising. The authors also mentioned that it's an "unsurprising conclusion that it is better to track than converge" for their toy non-stationary environment. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Can you elaborate on how this formal definition of continual learning can address practical problems such as plasticity loss, in-context learning, and catastrophic forgetting as mentioned in the Discussion section? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors mention the limitations implicitly as future work in the discussion section. Essentially, since this paper is almost purely theoretical, they treat the lack of empirical study as future work or a limitation.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: First, we would like to thank the reviewer for their time and energy in reading and reviewing our paper. Below, we respond to each point in detail, and we are happy to update the paper to reflect the reviewer’s suggestions, and to continue the discussion. > It is not immediately obvious how this definition can inspire new learning algorithms. This is a valid point. Our view is that this paper is about clarifying what problem we are actually studying. Doing so will remove the ambiguity that has surrounded continual learning, and allow us as a community to ensure we are studying the right problem. Further, this clarity provides new language to talk about agents and continual learning more carefully. The promise, then, is that exploration of these new tools can yield new perspectives on the design of algorithms. For instance, our work emphasizes the view that an agent can be understood as a choice of a learning rule (definition 3.2) and a choice of agent basis (definition 3.1). These two objects provide a new abstraction on the components of an agent. Furthermore, this abstraction may yield new kinds of learning rules that can inform algorithm design. As an example, we might consider learning rules that are guaranteed to produce a continual learning agent of the kind defined by definition 4.1—what are the necessary and sufficient conditions of such a learning rule? Answering this question will give a path toward defining algorithms that are guaranteed to produce continual learning agents (which, as Dohare et al. showed recently, is not true of many standard learning rules). Or, as we explore briefly in Appendix C, we might consider learning rules that capture certain intuitive families of agents, such as a model-based learning rule.
Combined with tools for thinking about continual learning, we could imagine delineating between continual model-based agents and regular model-based agents; such a distinction could give rise to new algorithmic principles and other algorithmic primitives, but working out the precise details is beyond the scope of the present submission. > The underlying concept that an optimal learning agent should not converge is not surprising. We agree, and we take the intuitive appeal of the statement “an optimal learning agent should not converge” as a positive characteristic of our definition. Our goal is to make this intuition mathematically precise, which we believe we have done. > Q: Can you elaborate on how this formal definition of continual learning can address practical problems such as plasticity loss, in-context learning, and catastrophic forgetting as mentioned in the Discussion section? This is a great question, and our reasoning will be similar to our response to the first point above. Now that we have a precise mathematical language for talking about continual learning agents, we believe we are well positioned to unpack and examine the kinds of phenomena that regularly arise in continual learning. We provide a similar answer in the response to all reviewers ("[Overall Response]"), but for simplicity, here are our thoughts. As an example, let’s consider plasticity loss. We suggest that plasticity loss can be modeled precisely using the conceptual tools we develop: _plasticity_ is the fraction of an agent’s corresponding agent basis that remains reachable to the agent over time, and plasticity _loss_ is any reduction of this number. We foresee pathways to both diagnose agents from a new perspective, and to design algorithms based around this measure; for example, by developing learning rules (def 3.2) that provably maintain plasticity. This unlocks a new point of emphasis for designing learning algorithms, one grounded in a rich mathematical toolkit.
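To make the proposed measure concrete, here is a small illustrative sketch (not from the paper, and heavily simplified: the basis is a finite set, the learning rule is collapsed into a hypothetical memoryless transition function over basis elements, whereas the paper's learning rules are history-dependent). "Plasticity" is read as the fraction of the basis still reachable from the agent's current element, and plasticity loss is any drop in that fraction over time.

```python
# Hypothetical toy model: a finite agent basis, and a learning rule
# abstracted as a deterministic transition function over basis elements.

def reachable(start, step):
    """Return the set of basis elements reachable from `start` under `step`."""
    seen, frontier = {start}, [start]
    while frontier:
        nxt = step(frontier.pop())
        if nxt not in seen:
            seen.add(nxt)
            frontier.append(nxt)
    return seen

def plasticity(basis, current, step):
    """Fraction of the basis the agent can still reach from `current`."""
    return len(reachable(current, step)) / len(basis)

basis = list(range(5))
cycling_rule = lambda b: (b + 1) % 5   # keeps the whole basis reachable
stuck_rule = lambda b: min(b + 1, 4)   # eventually collapses onto element 4

print(plasticity(basis, 0, cycling_rule))  # 1.0: full plasticity
print(plasticity(basis, 0, stuck_rule))    # 1.0 at the start...
print(plasticity(basis, 4, stuck_rule))    # 0.2: plasticity has been lost
```

Under this toy reading, a learning rule "provably maintains plasticity" exactly when this fraction never decreases along any trajectory through the basis.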
Or, we can understand in-context learning precisely in terms of the kinds of agent bases that agents search through—certain kinds of bases might be sufficiently rich to be capable of in-context learning, while others are not. Therefore, in the same way that we precisely define a continual learning agent in definition 4.1, we might unlock a precise definition of an in-context learning agent, at which point we can study the necessary and sufficient conditions required of such an agent. This is one example, but the same kind of analysis is possible for a variety of agent-centric properties such as plasticity loss and catastrophic forgetting. --- Rebuttal Comment 1.1: Comment: I'd first like to thank the authors for the detailed response. I think this is indeed an interesting direction, and I raised my score accordingly. Perhaps a bit outside the scope of this work, but I feel that actually analyzing plasticity using a toy task under the proposed definition, for example, can make the paper a lot more compelling.
Summary: This is an ambitious paper. The paper notes that the problem of "Continual Reinforcement Learning" lacks a rigorous definition and seeks to provide one. It mathematically defines the reinforcement learning problem and then explores the conditions in which an instance of the RL problem is a CRL problem. The legwork is done through the new "generates" and "reaches" operators. The "generates" operator accepts two arguments: 1. a set of "basis" agents, and 2. a particular agent (which may not be in that set, but exhibits some combined behavior of the basis agents). It returns a boolean truth value. The "generates" operator expresses if there exists a learning rule that can search over "basis" agents (with respect to an environment / historical observations) and find a combination of them that is equivalent to the particular agent in the argument. This function is also undecidable in general. Loosely, the "reaches" operator (wrt an environment) accepts two arguments: 1. a particular agent, and 2. a set of "basis" agents. It returns a boolean truth value. It returns True when an agent's behavior settles on an element in the basis set. The authors actually define a more precise "sometimes reaches" and "never reaches" operator. This is also undecidable. A learned agent will always choose a behavior equivalent to some basis agent in the context of the environment and history. Using these formalisms, a CRL agent is one that never reaches an agent basis, and a CRL problem is one where the best agent is a CRL agent. Notably, whether a problem is a CRL problem depends on the choice of basis. The paper performs mathematical analysis on these operators, relates them to two example instances of CRL, and provides a thoughtful discussion for new questions that can be asked about agent basis sets. Strengths: The informal definition "An RL problem is an instance of CRL if the best agents never stop learning" is intuitive and clear. I always appreciate explicit definitions for notation.
(although defining logical negation and the quantifiers might not be necessary in a conference paper). I type-checked several of the equations and everything seems to work out nicely. The theory gives a good feel for how someone might implement a framework for exploring agents in the language of this theory using a strongly typed language. The paper gives explicit examples and experiments demonstrating intuitive properties of this theory in the setting of Q-learning and finite Markov decision processes, as well as a more complex example in continual supervised learning. The paper proves several basic properties of the new reaches and generates operators. I have not checked the proofs for correctness, but I also don't see any obvious problems. Section 5 made me delete a lot of my questions because it answered them. The idea of exploring choices of the basis is mathematically interesting. The paper raises more questions than it answers. Weaknesses: The abstract is terse - problematically so. Even if this succeeds at being a theoretically sound definition of continual reinforcement learning, I think it is important to elaborate at least a little bit more. Note this is the main reason I'm rating this a 5 / 10. Given a better abstract I think this is a 7/10, but I think it is critical to give a better overview to win the researchers' attention you are competing for. A 9-page paper with a 20-page appendix full of proofs may be better suited for a journal. The paper raises more questions than it answers. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: On line 47 the authors explicitly call out that the sets A and O are countable. Does this imply that some other (non-numeric - i.e. non-ℝ) sets may be uncountable in this definition? I think the order of appendix A and appendix B should be switched. Provide the notation first. (The table was very helpful in reviewing). I became confused when I first read Theorem 3.1.
Particularly when the selected agent was not an element of the agent basis. In hindsight it makes sense that the new agent is a (linear?) combination of behaviors in the agent basis set, but I think the paper could make that more clear. It wasn't intuitive what "switches" meant to me. Or that lambda(h) in Figure 1 was a combination of the basis agents conditioned on history. (At first I thought the learning rule was just choosing an element of the basis set). In Theorem 3.1 are the infinite choices of basis countable? If a problem's status as CRL or not depends on the choice of basis, and there are infinitely many choices for the basis, then is this framework all that useful for thinking about problems? Theorem 5.1.2 suggests that it is possible to reduce any CRL problem to an RL problem via a change of basis. Is it possible to reduce any CRL problem to an RL problem by finding a function that changes the basis? Are there problems where such a mapping does not exist? Have you formalized any of the proofs into a proof assistant like Lean? I took a peek at the appendix, but I simply don't have the time to verify these proofs. Being able to point to an automatically checked proof for each theorem would greatly increase the quality of the paper. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: An optimal CRL agent could be difficult to reconcile with the alignment problem. In contrast, it might be the case that deciding if a CRL agent will remain aligned with its creators is undecidable. That question is worth exploring.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: First, we would like to thank the reviewer for their time and energy in reading and reviewing our paper. We recognize the reviewer spent a lot of time understanding and commenting on our work, and we appreciate it. Below, we respond to each point in detail, and we are happy to update the paper to reflect the reviewer’s suggestions, and to continue the discussion. > The abstract is terse - problematically so… We had originally thought a simple abstract would be a good choice, but in hindsight, we completely agree with the reviewer, and will modify our abstract accordingly. We will plan to change our abstract to the text provided in the reply to all reviewers ("[Overall Response]"), and are open to suggestions for improvement. > Swap Appendix A and B This is a great idea. We will swap the order of these two appendices. > I became confused when I first read Theorem 3.1. Particularly when the selected agent was not an element of the agent basis… This is helpful feedback. We will spend time improving the exposition of this result. > In Theorem 3.1 are the infinite choices of basis countable? Good question! The proof strategy of the result involves constructing a countable sequence of basis sets (where each of the sets will generate the original agent in the environment). Thus, the proof strategy only *requires* countably infinitely many bases. However, we suspect that there are augmentations to the proof strategy that also make use of uncountably infinitely many bases, so the argument can likely go through in both cases. > If a problem's status as CRL or not depends on the choice of basis, and there are infinitely many choices for the basis, then is this framework all that useful for thinking about problems? This is a critical point, and one we have focused on in developing this work. We believe that this point is a feature of the definition, rather than a bug.
To see why, we will plan to include the following text in Section 5 following the statement of Theorem 5. _It is reasonable to ask if the fact that our definition of CRL is basis-dependent renders it vacuous. We argue that this is not the case for two reasons. First, we conjecture that any definition of continual learning that involves concepts like “learning” and “convergence” will have to sit on top of some reference object whose choice is arbitrary. Second, and more important, even though the mathematical construction allows for an easy change of basis, in practice the choice of basis is constrained by considerations like the availability of computational resources. It is often the case that the domain or problem of interest provides obvious choices of bases, or imposes constraints that force us as designers to restrict attention to a space of plausible bases. For example, as discussed earlier, a choice of neural network architecture might comprise a basis—any assignment of weights is an element of the basis, and the learning rule $\sigma$ is a mechanism for updating the active element of the basis (the parameters) in light of experience. In this case, the number of parameters of the network is constrained by what we can actually build. Further, we can think of the learning rule $\sigma$ as something like stochastic gradient descent, rather than a rule that can search through the basis in an unconstrained way. In this sense, the basis is not arbitrary, nor is the learning rule. We as designers choose a class of functions to act as the relevant representations of a mapping from observations to actions, often limited by resource constraints on memory or compute. Then, we use specific learning rules that have been carefully designed to react to experience in a desirable way—for instance, stochastic gradient descent updates the current choice of basis in the direction that would most improve performance. 
For these reasons, the choice of basis is not arbitrary, but instead reflects the ingredients involved in the design of agents as well as the constraints necessarily imposed by the environment._ We hope the above answers your question, but we are happy to discuss this point further. > Is it possible to reduce any CRL problem to a RL problem by finding a function that changes the basis? Are there problems where such a mapping does not exist? This is a really interesting question, and we suspect that crafting a well-formed answer is a great direction to explore further. We do know that, by Theorem 5.1.2, if we were to include all of the optimal agents in the agent basis in question, then the problem is no longer CRL (since there is now an optimal agent that reaches the basis). However, finding a function to construct the optimal agents in this setting is not necessarily feasible. There might exist problems where, depending on the constraints imposed on the basis or set of learning rules, finding this function is either impossible or feasible. Clarifying such settings would be an interesting direction to pursue further. > An optimal CRL agent could be difficult to rectify with the alignment problem. In contrast, it might be the case that deciding if a CRL agent will remain aligned with its creators is undecidable. That question is worth exploring. In our view, most agents we as a community build and deploy will be facing a CRL problem, rather than a more traditional RL problem (i.e. those with a fixed solution). As a result, we believe it is an important open question how to frame safety and alignment research around CRL—we take this to be an important line of future work. In our view, it is a positive aspect of the work that it opens new lines of research (that are perhaps more well calibrated to the actual problems facing agents we design) surrounding safety. > Have you formalized any of the proofs into a proof assistant like Lean? 
We have not formalised the proofs into a proof assistant. --- Rebuttal Comment 1.1: Comment: Overall I like this paper, and I think the intended revisions are sufficient for me to raise my rating. > First, we conjecture that any definition of continual learning that involves concepts like “learning” and “convergence” will have to sit on top of some reference object whose choice is arbitrary. I buy this. I'd be interested in seeing a formalization of this conjecture.
Summary: In the paper the authors propose a formal framework for reasoning about continual reinforcement learning problems. To this end they introduce mathematical definitions for environments and agents which serve as a basis for defining the general reinforcement learning setting as well as the continual setting. In particular, the authors tie the problem of continual learning to the learning progress an agent makes and two operators describing its limiting learning behaviour. Finally, they provide two examples and derive several characteristics of the proposed operators. Strengths: The problem of continual RL is most certainly relevant to the community and will arguably become more prominent as real-world agents adapting throughout their deployment become more commonplace. To my knowledge the proposed formalism is original in its attempt to characterise this setting formally. Overall I found the framework sound and its description easy to follow. I agree with the authors that such a formalism has the potential to open up new perspectives on and analysis of (long-term) learning behaviour of agents. Weaknesses: Throughout the paper the terms "agent" and "behaviour" are used almost interchangeably. However, a formal definition of behaviours is missing, making it somewhat unclear whether or not the two terms are actually meant to be equivalent. If so, the statement "we can understand every agent as implicitly searching over a set of behaviors" needs clarification. If not, I would encourage the authors to formally define what a "behaviour" constitutes. A related issue is the separation of agents and their learning rules. Arguably, an agent parameterised by a recurrent network is a continual learner as it changes its "behaviour" (in the sense of the one-step policy) based on the history giving rise to its internal state. However, under the provided definition such an agent would not be considered a continual learner unless it updates its weights. 
Finally, the paper would benefit from a more in-depth discussion of how previous approaches to continual RL compare to the proposed framework or can be embedded into it. Similarly, the authors mention several avenues for future research such as the examination of catastrophic forgetting - it might be interesting to include at least a characterisation of these problems in the new framework to inspire future work. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: - Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: First, we would like to thank the reviewer for their time and energy in reading and reviewing our paper. We address the reviewer’s primary questions and concerns below, and will plan to update the paper in line with the reviewer’s suggestions. > Definition of agent vs. behaviour… This is a great point, and one we will be sure to attend to carefully in the paper. We use the term “behaviour” early on in (and throughout) the paper to appeal to intuition before we have defined “agent” more precisely in Definition 2.4. The reason for this choice is that we find the phrase, “we can understand every agent as implicitly searching over a set of behaviors” easier to grasp than “we can understand every agent as implicitly searching over another set of agents”. In light of your comment, we will revise the paper to remove references to “behavior” entirely, or make it clear in some cases when we are using the term in a colloquial way to appeal to intuition (and that it will be replaced by the more precise “agent” after Def. 2.4). We hope this helps. > A related issue is the separation of agents and their learning rules. Arguably, an agent parameterised by a recurrent network is a continual learner as it changes its "behaviour" (in the sense of the one-step policy) based on the history giving rise to its internal state. However, under the provided definition such an agent would not be considered a continual learner unless it updates its weights. The reviewer makes an excellent point, and one that was at the heart of a lot of the puzzles we thought about as part of this work. In short, as per Theorem 5.1.2, depending on the basis, agents can be understood as learning or not. In an MDP, for instance, do we want to consider a fixed stationary policy as learning, simply because it reacts to a change in the MDP state? Or should we only consider explicit _updates_ to the policy as learning (or something else)? 
This tension is at the heart of our view of continual learning, and one we fully embrace by allowing the choice of basis to characterize what it means for an agent to learn: an agent is a continual learner with respect to a basis if the agent keeps switching between basis elements forever. We discuss the choice of basis in response to one of the questions of reviewer 9sbq, which addresses a similar point. In the case of the neural net described by the reviewer, if the agent can be understood as switching to a new basis element in light of experience, then the neural net is viewed as learning (that is, an agent updating its weights is not, on its own, sufficient to produce a learning agent—it must be that the agent actually updates its behavior in response to experience in the relevant way; and this could happen due to the recurrent state update, or due to weight change). > Finally, the paper would benefit from a more in-depth discussion of how previous approaches to continual RL compare to the proposed framework or can be embedded into it. Similarly, the authors mention several avenues for future research such as the examination of catastrophic forgetting - it might be interesting to include at least a characterisation of these problems in the new framework to inspire future work. These are both very good suggestions, thanks! We will add two pieces of discussion to the paper. First, as mentioned in the reply to all reviewers ("[Overall Response]"), we will add an in-depth discussion about the relationship between our definition of CRL and prior approaches to thinking about CRL, with an emphasis on the work by Ring and the recent CRL survey by Khetarpal et al. Succinctly, the work by Ring emphasizes the generality of the environment model and the reward function, while the survey by Khetarpal et al. focuses explicitly on non-stationarity, similar to work on multi-task RL. 
Our definition will directly accommodate and extend both of these views, and we are happy to include this discussion in the paper. Second, we can expand on connections to central empirical phenomena like catastrophic forgetting and in-context learning. For example, in-context learning might be fully characterized in terms of specific kinds of learning rules and agent bases (our definitions 3.1 and 3.2), where the base elements themselves are capable of more sophisticated learning. Plasticity loss can be captured in terms of changes to the capacity of an agent over time, as discussed in the general response—we believe these tools may help open new lines of study surrounding plasticity and related concepts. While these lines of research are still developing, we believe there are new opportunities to model and analyze these important empirical phenomena using the formal language from our work. --- Rebuttal Comment 1.1: Comment: I'd like to thank the authors for their thorough responses. I think the proposed changes sufficiently alleviate most of the weaknesses and am updating my score accordingly.
TART: A plug-and-play Transformer module for task-agnostic reasoning
Accept (poster)
Summary: This paper proposes and evaluates a method (TART) for improving the in-context learning of large, pretrained base models applied to downstream binary classification tasks. TART stacks a second transformer on top of the pretrained base transformer. The TART transformer is trained to rely on in-context examples to solve a large number of synthetically generated binary classification problems. TART’s training is task-agnostic in the sense that its synthetic training data is unrelated to the domain of the downstream tasks on which the combined base+TART model is evaluated. The experiments demonstrate that TART improves the combined model’s in-context learning across a range of binary classification tasks, and a diverse set of base models pretrained on language, vision, and audio data. Strengths: Given the profound importance of large pretrained transformer models today, and their remarkable ability to learn from in-context examples, general methods for further boosting the performance of in-context learning are of great interest. The paper’s proposed method, TART, is a well-motivated and novel approach. And the paper’s experiments clearly demonstrate the degree to which TART boosts the in-context learning of large, pretrained base models applied to downstream binary classification tasks. Exciting results! Finally, as noted in section 5 of the paper, the TART approach may lead to similar methods that improve LLM performance on a wider range of tasks beyond binary classification. Weaknesses: The paper claims to make three main contributions, which I summarize here: 1. WHAT CAUSES THE ICL GAP? Study why in-context learning does not perform as well as task-specific fine-tuning despite having access to the same information, via a representation-reasoning decomposition. 2. TART CLOSES THE ICL GAP Propose a new task-agnostic method, TART, which bridges the performance gap to task-specific methods and is trained using only synthetic data. 3. 
TART IS GENERAL Demonstrate that TART works across different NLP tasks for a range of model families. The same inference module generalizes to vision and speech domains as well. I follow this outline of claims in describing what I see as the paper’s weaknesses. Claim 1. WHAT CAUSES THE ICL GAP? The paper mounts an ambitious study of the gap in performance between in-context learning and task-specific fine-tuning, given a common set of examples. The experimental results are suggestive, providing ample motivation for the TART approach. However, the paper ventures far beyond the question of TART’s motivation when it attempts to draw broad conclusions about a clear representation-reasoning decomposition. The field has arrived at no settled definition of reasoning, so any distinction between representation and reasoning remains nebulous. Transformers themselves illustrate the complex, intertwined relationship between representation and reasoning. As a result the paper makes extraordinary claims (requiring extraordinary evidence) when it equates reasoning with probabilistic inference, and when it makes statements like the following: • “LLMs lack reasoning abilities” • “LLMs lack the ability to perform simple reasoning over their learned representations” • “this performance gap exists due to their inability to perform simple probabilistic reasoning tasks.” I believe that too much of the paper is devoted to pursuing such a challenging case, which would require far more evidence than the experiments provide. And I see it as unnecessary, since the experimental results are more than sufficient to provide adequate motivation for the TART approach. Furthermore, the paper’s heavy emphasis on its reasoning-centric terminology (like “reasoning module” or “probabilistic inference module”) obscures the relatively simple and limited nature of TART. To be concrete, the paper doesn’t need to wade into the philosophy of reasoning vs. 
representation to make the case that TART (as one might expect) boosts in-context learning by training a transformer (on task-agnostic, synthetic data!) to perform well at in-context learning. In summary, while I agree that the paper does cast some useful light on the performance gap between in-context learning and task-specific fine-tuning, much mystery remains there, so I don’t see this as a strong contribution of the paper, although it is sufficient to motivate the approach. Claim 2. TART CLOSES THE ICL GAP The experiments demonstrate to me that TART does succeed in largely closing the in-context learning gap, at least when given enough examples. But what happens when only a few examples are available at test time? The paper presents no performance comparisons against baselines tested on fewer than 20 examples, as far as I can see. Since the TART transformer is a task-agnostic module sitting on top of the base model, one would expect its performance to be no better than random in the absence of task-specific examples at test time. So how many test-time examples does it take for TART to show good results? Below 20 examples is an important range to explore, since in-context learning is known to be able to derive great benefit from as few as one or two examples. If it turns out that TART requires at least, for example, 10 examples to deliver benefit, then the paper would be over-claiming when it says that TART, “when composed with any LLM via its embeddings, generically improves upon its reasoning abilities.” The all-important technical details of TART’s training are poorly explained in the current version of the paper. Section 3.2.1 leaves too much to the reader’s imagination. Line 217 talks about updating the parameters $w$, which must therefore refer to the parameters of the TART transformer. But equation (3) also contains a symbol $w_t$ which cannot refer to the transformer’s parameters. 
Rather, the $w_t$ must refer to the randomly sampled weights for the linear layer used to define a binary classification problem that the transformer is trained on. At least this is my current understanding. The apparent conflict of notation needs to be sorted out, and eq. (3) should be explained much more carefully in the text. For instance, it’s worth mentioning that the angle brackets refer to a dot product, since this notation is rarely used in papers on transformers. The paper would be easier to follow if it used the term logistic regression correctly. Specifically, logistic regression is not a problem or a task. Rather, logistic regression is a method that can be used in solving classification problems. The term is used correctly on the following lines: Line 231 “For each problem, we train task-specific linear classifiers using logistic regression” Line 535 “To conduct linear probing over the embeddings, we perform logistic regression over the output embeddings of each model and the given labels in the training set using the built-in logistic regression solver from the scikit-learn python library, utilizing the lbgfs solver.” But the rest of the paper uses the term “logistic regression problem” as if logistic regression were a problem type rather than a solution method. Terminology for eq. (3) is tricky, because it involves (I believe) a linear layer of random weights which could be (but are not) obtained by the logistic regression method. And these random weights are used, along with random covariates $x$, to generate synthetic classification problems. Because the structure of this generator is related to the method of logistic regression, it is tempting to refer to the generated classification problems as logistic regression problems. But that’s conflating problems with methods. I suggest spending a few sentences to explain eq. 
(3) in simple terms, and replacing the phrase *logistic regression problem/task* with *binary classification problem/task* throughout the paper. It seems that positional encodings must be required in training the TART transformer, to establish the alternating positions of $x$ and $y$ vectors in the sequences. But I find no mention of positional encodings in the paper. Claim 3. TART IS GENERAL It is shown that TART works across different binary classification tasks and model families, including vision and speech domains. However, the results are all limited to binary classification, which is not a very general class. As explained in the discussion, “In future work, we seek to understand whether synthetic tasks exist for training other generic reasoning modules, capable of improving base LLM performance on tasks such as generation or summarization.” Such results would be highly significant. Until then, the current restriction to binary classification limits the significance of this contribution. MINOR SUGGESTIONS Line 130 “at par when compared with” should be changed to “on par with”. Line 191 “TART comprises of two components” should be changed to either “TART comprises two components” or “TART is comprised of two components”. Line 584 A transformer outputs a sequence of vectors, not embeddings. CONCLUSION Because of the problems noted in this section, I would vote to reject the paper in its current form. However, I have confidence that the authors can address all of these issues in the next version of the paper, so for now I’m optimistically voting for acceptance. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: I assume that TART stands for Task-Agnostic Reasoning Transformer. Is this explained somewhere? Line 589 If the context length is 258, and each example is one pair, then there would be 129 in-context examples, not 256. Right? 
I’m not sure what’s meant by “the binary labels y are encoded as a one-hot vector to match the input dimension $d$ of the corresponding covariates $x$.” Does this mean that only 2 of the 16 dimensions are ever 1.0, while the other 14 dimensions are always 0.0? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: No problems noted. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the detailed comments and helpful feedback on our work. We are glad you found our approach well-motivated, novel and exciting. We completely agree with the three main contributions that you highlighted in your review. While we addressed some of these concerns in the general response (study on representation-reasoning trade-off, generalization to multiclass, and TART in low-data regime), we provide a detailed response below. **Reasoning vs representation.** We completely agree that there is no consensus in the field on a definition of reasoning vs. representation, and much less so for in-context learning which is quite nascent. As we mentioned in the general response, this motivated us to make an attempt at explaining why ICL fails. We make a first step towards this understanding through our representation-reasoning decomposition which we mathematically characterize in Equation (1). We feel that this is an important and interesting contribution in and of itself, and while just a first step, we believe it will indeed be helpful and inspiring in spurring further research along these directions. Our representation-reasoning definitions as well as the analysis with Hypotheses H1-H3 are currently focused on binary classification tasks in this draft. Our statements and claims are indeed supposed to be restricted to this setup. Following our new extensions to multiclass tasks (see attached PDF for experimental results), we plan to extend this analysis to the multiclass tasks as well. **TART in low in-context example regime.** For most plots in our paper, we study the performance of TART (and baselines) beginning with 16 in-context examples (k). In the attached pdf, we show that TART works even in the extremely low-data regime (with k = 4, 6, 8, 10). TART can improve the base ICL performance by up to 22.2% (k=4), 30.1% (k=6), 24.1% (k=8) and 31.3% (k=10). See Table 3 in attached PDF. 
Furthermore, on 2 out of 3 datasets, we see that TART outperforms full model finetuning by up to 15.8%. While it is challenging to adapt the LLM for the task with so few data points, TART is able to better use the sample information. We will update our draft with a more extensive study across all benchmarks in this regime. **Beyond binary classification.** Thank you for this suggestion. In response to this feedback, we train a multiclass TART module which outperforms the ICL baseline by over 59 points (see attached PDF for more details). We realized that our training algorithm was not specific to binary classification tasks and could be easily extended to multiclass problems by using multinomial logistic regression as the synthetic task. We will update the draft to feature these extensions in Sec. 4 (Experimental evaluation). **Use of term logistic regression.** We have actually used the term logistic regression in an overloaded manner to refer to both the logistic regression generative model as well as the estimator obtained by solving the ERM with respect to this model. We will clarify this in the paper and make it clear which of these we are referring to at a given time. Note that the logistic regression model is a very specific generative model of binary classification which we use in our paper for the synthetics. **Minor Comments.** We address some of the minor concerns here: - TART stands for Task-Agnostic Reasoning Transformer (mentioned in Sec. 3 header). We agree that the acronym could be explained earlier and we will reflect this change in our draft. - For context length = 258, the number of examples should be 129. We will correct this in the updated draft. - For the binary classification task, we only use the last dimension out of the 16 to set the labels – it is 1 when the label is positive and 0 when the label is negative. Such a mapping is quite common and has been previously used in [1]. *References* [1] Garg, Shivam, et al. 
"What can transformers learn in-context? a case study of simple function classes." Advances in Neural Information Processing Systems (2022) --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: I applaud the authors for their detailed rebuttals, and especially for the additional experiments which addressed a couple of my concerns (TART when given few examples, and multiclass tasks). Unfortunately, the authors did not even mention my major points regarding the lack of clarity in describing the central method: The terse coverage of section 3.2.1 and especially Eq. (3), the apparent dual usage of the symbol $w$, and the key question of positional encodings. My initial rating was based on an optimistic assumption that a revised version of the writeup would be attached. Without reading it, I have little confidence that the published paper would explain the method well enough to benefit the community. For this reason, I am lowering my score from 5 to 4. --- Reply to Comment 1.1.1: Comment: Thanks a lot for taking the time to read our rebuttal. Unfortunately, the conference does not allow us to submit a revised version of the draft and we are not allowed to put in any text in the attached pdf. However, we do provide a concrete retelling of the story in the global response which will allow us to restructure and add more content to our revised draft. - We added the clarification to the dual usage of the symbol $w$ in response to Reviewer hArx (https://openreview.net/forum?id=ZXbgVm3PSt&noteId=IdKRUWCfRs). It is a minor typo and should instead be $\theta$. - We will pull up the details of the algorithm from Appendix C.1 to section 3 -- it is already there and simply needs to be accommodated with the existing content. - Yes, the default GPT-2 architecture that we use (see Figure 15 in the draft) has absolute positional encoding for each position. We will clarify this in more detail in the paper. We hope this addresses some of the concerns you mentioned above.
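The synthetic logistic-regression generative model discussed in this thread (random task weights $\theta$, random covariates, labels drawn through a sigmoid, labels lifted into the covariate dimension via the last coordinate) can be sketched as follows. This is an illustrative reconstruction assembled from the review discussion, not the authors' training code; the Gaussian sampling distributions and the function name are assumptions.

```python
import numpy as np

def sample_synthetic_task(d=16, k=8, rng=None):
    """Sample one synthetic binary-classification task: covariates x and a
    task weight vector theta are drawn at random, and labels follow the
    logistic (sigmoid) model. Here theta is the per-task weight vector
    (the rebuttal's theta), not the inference module's parameters."""
    if rng is None:
        rng = np.random.default_rng()
    theta = rng.standard_normal(d)           # task-specific separating direction
    x = rng.standard_normal((k, d))          # in-context covariates
    p = 1.0 / (1.0 + np.exp(-(x @ theta)))   # P(y = 1 | x) under the logistic model
    y = (rng.random(k) < p).astype(float)    # sampled binary labels

    # Interleave (x_1, y_1, x_2, y_2, ...) into one sequence. Each label is
    # lifted to a d-dimensional vector whose last coordinate carries the
    # label, mirroring the encoding discussed in this thread.
    y_vec = np.zeros((k, d))
    y_vec[:, -1] = y
    seq = np.empty((2 * k, d))
    seq[0::2] = x
    seq[1::2] = y_vec
    return seq, y
```

A training step would then ask the module to predict each label from the sequence prefix ending at its covariate, across many freshly sampled tasks.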
Summary: The authors study why in-context learning doesn't perform as well as task-specific fine-tuning by decomposing the process of in-context learning into representation and reasoning. They discover that the gap is due mostly to deficiencies in reasoning and propose a new method called TART to bridge the gap. They demonstrate that TART is task and model agnostic. Strengths: Originality: The authors note and address the problem that in-context learning underperforms model fine tuning. The paper's primary contributions are disentangling representation strength and reasoning ability and finding a way to measure them, then using the insight from their measurements to improve the model's reasoning ability and thus close the gap between in-context learning and fine-tuning. Both are very original. I thought the concept of training a reasoning model to do probabilistic inference using synthetic data sets was clever. The same for averaging embeddings of multi-token data points in order to piece together the base model and the reasoning module. I also like thinking of LLM reasoning as probabilistic inference in the first place. That seems insightful. Quality and Clarity: The quality here is high. They provide extensive experimental results to validate their idea, comparing TART with baselines across modalities and models. Every concept is backed up by a full sweep of experiments; they answer questions before I have a chance to ask them. For example extending this to vision and speech went above and beyond. The writing is clear and well structured, and the methodology is clearly explained. Their figures and tables are clear and accessible. Significance: I believe the significance is high. The authors address a limitation of LLMs with a novel solution that is broadly useful. It has potential to significantly improve transformer models' in-context learning performance across the range of tasks. Overall, great work. 
Weaknesses: There's a typo on line 339, it should be CIFAR-10 (classes plane and bird) and MNIST (classes 0 and 8) Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: In a real world application of this technique it would be inconvenient for the user to have to indicate which tokens belong to which data points/labels, which I think the embedding averaging method requires. For example if I understand right Sentence: The movie is good Label: Positive The user would have to manually indicate which tokens belong to the data (sentence) and label so it can be averaged. I can imagine using the base model to separate them out for you. Is that how this would work? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the positive comments. We are glad that you found our work interesting on both the problem and the solution front, and found the paper well-written and clear. **Demarcating examples**. This is an interesting challenge. Currently, our code takes as input a demarcating delimiter (which is “Sentence:” and “Label:” in the example you provided) and uses that to identify which positions to take the average embeddings over. If you are working with a fixed prompting template, implementing this is pretty straightforward. Alternatively, you could use Streaming embeddings (Figure 24), which side-steps this issue entirely because it embeds one example at a time. **CIFAR <-> MNIST.** Thanks for pointing out the minor typo. We will correct it in the updated draft.
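For readers unfamiliar with the delimiter-based averaging the rebuttal describes, here is a minimal sketch. The function name, the delimiter convention, and the shape of the per-token embedding matrix are all illustrative assumptions on my part, not the authors' implementation:

```python
import numpy as np

def demarcate_and_average(tokens, embeddings, delimiters=("Sentence:", "Label:")):
    """Average per-token embeddings within each delimited segment.

    `tokens` is a list of token strings and `embeddings` a (num_tokens, d)
    array of their per-token embeddings. A segment is assumed to start
    right after each delimiter token and run until the next delimiter
    (or the end of the sequence).
    """
    starts = [i for i, t in enumerate(tokens) if t in delimiters]
    bounds = starts + [len(tokens)]
    segments = []
    for s, e in zip(bounds[:-1], bounds[1:]):
        seg = embeddings[s + 1:e]          # skip the delimiter token itself
        segments.append(seg.mean(axis=0))  # one averaged vector per segment
    return np.stack(segments)
```

With a fixed prompting template (as the rebuttal notes) the delimiter positions are known, so this reduces to a simple index lookup per example.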
Summary: The paper studies why in-context learning achieves inferior performance compared with finetuning and adapters, then proposes an LM-based inference module that learns to perform logistic regression based on the sample and the preceding (sample, label) sequence, where the samples and the linear cutting plane are sampled vectors. When testing, the base LLM encodes the in-context examples into single vectors, concatenates them with their corresponding labels to form the (sample, label) sequence, and combines this with the test sample to predict the test label. Experimental results show the proposed inference module can achieve comparable results without task-specific parameters. Strengths: The paper proposes a new way to classify a test sample based on training examples. The proposed inference module learns to predict the sample label given (x, y) examples which share the same separating plane. By sampling enough varied planes and inputs x, it is possible for an LLM to learn how to find the linear separating plane given (x, y) examples, thus correctly predicting the test label without further training. The proposed method may apply to meta-learning tasks, as both settings are similar. Weaknesses: Although the proposed inference module is interesting, I cannot see that it has a significant advantage over linear probing. Firstly, TART requires the base LLM to encode examples and then feeds the average-pooled vector into the proposed inference module. This process requires a similar amount of computation and memory as in-context learning, and it must be repeated k times in the proposed LOO embedding if given k examples. On the contrary, linear probing can encode each sample independently and then train one linear layer for prediction. Secondly, since the inference module is trained with linear logistic regression, the LLM representation must be linearly separable for task-specific labels. 
Thus, it has the same upper bound as linear probing, while one linear layer should be much easier to train than a GPT-2 model. Line 10 claims "performance gap exists due to their inability to perform simple probabilistic reasoning tasks" and Line 51, "...prompt engineering or active example selection, focus entirely on the LLM’s learned representations." The authors do not clearly explain what the reasoning ability is and why prompts improve learned representations. I assume prompt engineering is used due to the unavailability of hidden representations inside GPT-3 or GPT-4. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: What do the "parameters w" in Line 217 and "w_t" in Equation 3 mean? Do these two refer to different things? How does TART obtain multi-way classification labels, given that it is trained with two-way classification? Can this method generalize to text generation with a much larger vocabulary? The authors may compare TART with linear probing besides FT-layer given the same number of examples. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The authors addressed the limitations and broader impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
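The synthetic training setup the review describes (sampled separating vectors w_t, sampled inputs, logistic labels) can be sketched roughly as follows. Dimensions, scales, and the function name are placeholders, not the paper's exact data-generating protocol:

```python
import numpy as np

def sample_logistic_task(d=16, k=32, rng=None):
    """Sample one synthetic binary-classification task: a random weight
    vector w defines the separating plane, inputs x are Gaussian, and
    labels follow a logistic model. A sketch of the kind of (x, y)
    sequences the inference module is trained on, with placeholder
    dimensions and scales."""
    rng = np.random.default_rng(rng)
    w = rng.standard_normal(d)                # task-specific separating direction
    x = rng.standard_normal((k, d))           # in-context examples
    p = 1.0 / (1.0 + np.exp(-x @ w))          # logistic label probabilities
    y = (rng.random(k) < p).astype(np.int64)  # sampled binary labels
    return x, y, w
```

Training then amounts to drawing many such tasks (many w_t) and asking the module to predict each y from the preceding (x, y) pairs, so it learns the family of logistic problems rather than any single boundary.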
Rebuttal 1: Rebuttal: Thanks for the detailed comments and feedback. We are glad that you find TART interesting and see applications of our work to settings beyond ICL (i.e., meta-learning). **Comparison with linear probing.** As we highlighted in the general response above, our main objective with this work is to understand why a gap exists between ICL and finetuning, and to provide an algorithmic way to fix this gap. The challenge is to devise algorithms that can obtain the performance of finetuning while maintaining the benefits of ICL (namely, no need for training at inference time, providing a natural language interface to the user). As a first step towards this bigger goal, we begin by studying this problem for the class of binary classification tasks. While linear probing (for each task separately) would get close to finetuning performance, one has to forgo the advantages of ICL to use it. Furthermore, it is not possible to extend linear probing beyond the realm of classification tasks. On the other hand, TART allows users to benefit from the ICL framework and provides a “no training” way to match finetuning performance. Additionally, it opens up the possibility of improving the generative ability of LLMs using synthetic tasks. This is a promising line of work, which requires a different set of tools from those in the paper, and one which we are currently exploring. Additionally, we would like to clarify that the adapter head baseline, which is included in the original manuscript, is actually a slightly stronger baseline than the linear probing method. The adapter head comprises a single linear layer composed with a sigmoid nonlinearity. TART performs 3.4% better than these adapters. **Generalizing TART.** In the current form, you could obtain multiclass labels using a one-vs-all approach by looking at the logit probabilities for each class. 
However, to show that our method generalizes beyond binary classification, we trained an extension of TART for multiclass problems using multinomial logistic regression as the synthetic data-generating task. We find that our TART inference module improves the base ICL performance by up to 59%. Please see Tables 1 and 2 in the attached PDF for details. **Explaining reasoning ability.** Throughout the paper, whenever we say the ability to perform reasoning, we mean the ability to solve simple probabilistic reasoning problems, such as the logistic regression model. We make this connection mathematically precise in Equation (1) in Section 2.3 (Lines 149-150). The equation precisely defines what the representation and reasoning abilities are. The representation-reasoning decomposition in Eq. (1) allows us to formally state the hypotheses (H1, H2, and H3) that we evaluate next. **Connecting prompt engineering with improved representations.** We agree that the connection between prompt engineering and improving representations was not elaborated on in the current version. We plan to revise and add this explanation: “Via prompt engineering, users are searching the language space for a natural language template that best represents their task. This search enables users to find the best representations for their specific task, which makes the LLM more likely to solve it.” Thank you for pointing this out. **Minor typo.** The parameter $w$ in Line 217 should be $\theta$. We will fix this in the draft. --- Rebuttal Comment 1.1: Comment: I thank the authors for the response. My concerns about generalizing TART and Equation (3) have been addressed. But I agree with reviewer CKNq that the paper makes extraordinary claims about representation and reasoning. It requires much more evidence to support the claim that the LLM cannot perform simple reasoning. Without that claim, the proposed TART has its merits in improving ICL. 
It uses a GPT-2 to learn to approximate any linear transformation given examples (x_i,t, y_i,t). It is interesting and has similar concepts to meta-learning. However, it has several drawbacks: 1) You must sample enough w_t to span all possible distributions of the LLM output features. 2) Why would one approximate the linear weights through extensive meta-learning instead of learning the linear weights directly, that is, linear probing? With enough sampling and training cost, I think TART can outperform linear probing. But since Figures 5 and 6 show the adapter surpassing TART most of the time, I still have concerns about whether the merits are significant in practical use. Due to the above two points, I would like to keep my rating unchanged but am OK with either decision. --- Reply to Comment 1.1.1: Comment: Thank you for your response and detailed feedback on our work. We are glad we were able to address your concerns on multi-class evaluations and Eq. (3). Please allow us to address the additional questions you asked. We agree with both you and Reviewer CKNq that our claims in Section 2.3 are restricted in scope to binary classification tasks and that this is not reflected in our writing. As we highlighted in our response to Reviewer CKNq, we will restrict the scope of those statements and make sure they are not taken out of context. In addition to this, we will make the analysis more general by extending it to multi-class classification problems as well. (From our response https://openreview.net/forum?id=ZXbgVm3PSt&noteId=Pc5b4H5FKn) > Reasoning vs. representation To make sure our claims are within scope, we have updated: - Line 108: understand their relative performance for binary classification tasks. - Line 111: downstream binary classification tasks. - Line 144: This section investigates why this performance gap exists for binary classification tasks. 
- Line 151: insufficient reasoning abilities, i.e., the ability to learn a linear decision boundary for a binary classification task - The hypotheses H1-H3 have been updated to focus on binary classification tasks. - Additionally, we will add an extra section in the appendix focusing on the same analysis with multi-class tasks. > On linear probing vs in-context learning Thank you for raising this question; we believe that the paper and our initial response failed to sufficiently outline the precise regime where TART is actually preferable to linear probing. We believe there are two regimes: - Single-task regime: In this regime, practitioners would like to improve the performance of a base model on a single task such as sentiment analysis. As your review highlights, linear probing is preferable to TART in this setting since it is computationally cheaper to optimize a linear model for a given task. - Multi-task regime: In this regime, practitioners would like to improve the performance of a base model over multiple tasks (e.g., all tasks on the RAFT benchmark), where an in-context prompt is used to adapt the model for each task. This paradigm is reflected in numerous LLM benchmarks like the OpenLLM leaderboard [1], HELM [2], and RAFT [3]. For these benchmarks, researchers have focused on task-agnostic techniques like modifying pre-training data and fine-tuning the model on instruction/chat data. These are computationally expensive, as both require training the whole model. Our work argues that TART is intended for this regime. For instance, we show that on the RAFT benchmark, TART improves GPT-NEO (125M)'s performance such that it outperforms BLOOM (176B) and is within 4% of GPT-3 (175B). The comparison to linear probing is to demonstrate that TART can match task-specific interventions while being task-agnostic. 
TART's advantage over linear probing scales with the number of tasks that practitioners care about -- TART's advantage over linear probing for one task is minimal, but the advantage for 100s of tasks (as captured by benchmarks and real-world user interactions with chat models such as ChatGPT) is substantial. This is because, for each new task, linear probing requires practitioners to perform task-specific optimization (around 1 million parameters for sequence length 1024 and embedding dimension 1024). We are glad that you raised the question "Why can't one use linear probing?" and we will update the manuscript with the above discussion distinguishing the two setups. We hope that this addresses your concerns, and we are happy to answer any more questions that you might have. We are really grateful for your response and insights. *References* [1] https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard [2] Liang, Percy, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian Zhang et al. "Holistic evaluation of language models." arXiv preprint arXiv:2211.09110 (2022). [3] Alex, Neel, Eli Lifland, Lewis Tunstall, Abhishek Thakur, Pegah Maham, C. Jess Riedel, Emmie Hine et al. "RAFT: A real-world few-shot text classification benchmark." arXiv preprint arXiv:2109.14076 (2021).
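For context on the baseline debated in this thread: task-specific linear probing amounts to fitting one logistic layer on frozen embeddings per task. A minimal sketch, with illustrative hyperparameters and function names (not from the paper):

```python
import numpy as np

def train_linear_probe(embeddings, labels, lr=0.5, steps=2000):
    """Fit a single logistic layer on frozen embeddings by gradient
    descent on the cross-entropy loss. One such probe is trained per
    task, which is the task-specific cost the rebuttal contrasts with
    TART's task-agnostic module."""
    n, d = embeddings.shape
    w, b = np.zeros(d), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(embeddings @ w + b)))
        grad = p - labels                      # dL/dlogits for cross-entropy
        w -= lr * embeddings.T @ grad / n
        b -= lr * grad.mean()
    return w, b

def probe_predict(embeddings, w, b):
    return (embeddings @ w + b > 0).astype(np.int64)
```

The per-task cost here is the optimization loop itself; TART's claimed advantage is amortizing that across many tasks by training one reusable reasoning module once.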
Summary: The paper presents a recipe for adapting an LLM to perform classification tasks in a task-agnostic manner. The authors first try to tease apart whether existing "in-context" methods, which construct prompts to describe the task and then ask for the inference result, fail to achieve great performance because of information extraction or reasoning ability. By constructing a linear probe experiment they get indications that it may be the reasoning ability which hampers performance. The authors then propose to improve task performance by training an inference module to solve a generic task, here few-shot logistic regression, and then show that they can apply this to the outputs of the LLM to obtain good performance on desired tasks. The downstream tasks considered range from sentiment classification and news article categorization to spam detection. They also verify that the performance gains on text tasks carry over to image tasks and speech. Strengths: * There is still a lot of scope and practical use in extracting information from a text model in a reliable way. * The method is novel as far as I can tell. * Contains reasonable baselines. Weaknesses: * Although I find it intriguing, the paper leaves me with more questions than it answers. I think it would be very beneficial to make the presentation much more concrete and to make some of the downstream tasks clear and exemplified in the main text. They remain very abstract, and a lot of the intuition that readers should build up is lost that way. * Although I think the line of scientific inquiry is quite intriguing, I don't find the paper mature enough to be published in its current form. I find the writing can be improved a lot, and maybe the context can be expanded a bit. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: * Theorem 1 seems to say that the error is controlled by how similar P_NL and P_syn are, but they are really different. 
In some sense they should always be different, because it's a different language with a very different structure. In NL we have a lot more expressivity and less regularity? Did you do any experiment that broadens the task space to see if this improves? * I think Section 3.3 would be more appropriately named "how to leverage LLM embeddings". The embeddings are always from the last layer; it's more how they are used that changes? * Are all the tasks considered binary classifications? I guess line 339 should be corrected; MNIST and CIFAR are switched. We can't easily evaluate this work when this is happening. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: None. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful comments on our work and we are glad you found our proposed method novel. First, we address your major concern regarding the presentation of our work, and then respond to your questions. **Quality of presentation.** While other reviewers (tMpn, CKNq) found the quality of our presentation to be high, we welcome your feedback on how it can be further improved. We will make the following changes to the final camera-ready draft to improve its readability: - Update the introduction section to better reflect our objectives and motivate our proposed method. We will use the outline from the general response to communicate our message better. - Add an additional section in the main body (derived from content from Appendix A) on using synthetic finetuning to improve a pre-trained model's ICL ability. - We will add some more details of our downstream evaluation tasks in language, vision and speech (expanding on lines 67-73). Currently, these are detailed in Section 4 with details deferred to Appendix D due to lack of space. - For Section 3.3, our choice of name “Which embeddings to take” was to reflect the fact that one can use the LLM in different ways to embed the input sequences (i.e., Vanilla embeddings or leave-one-out (LOO) embeddings). We agree that it might be more appropriate to focus on “How to embed” as opposed to “Which embeddings to take”. We will make this change. - We will fix the typos in the revised draft (MNIST <-> CIFAR). **Closeness of distributions P_NL and P_SYN in Theorem 1.** We would like to clarify that P_{NL} is the distribution induced on the embeddings of the natural language data rather than the data itself (see Lines 619-622). Furthermore, when we combine the TART module with the base LLM, we perform a PCA renormalization step (Lines 221-224) which projects the embeddings into the same space as the synthetics. 
These two facts combined ensure that P_syn and P_NL are indeed very similar to each other in our experiments. It is indeed surprising that the projected and renormalized NL embeddings are close to the synthetic Gaussian distribution. We will add this discussion to Section 3.4 to make it clear. **Extensions of TART beyond binary classification.** In this work, we focus on the set of binary classification tasks across language, vision and speech domains. That said, based on the feedback from the reviewers, we provide additional experiments (see Table 1 and Table 2 in the attached PDF) demonstrating that our method can indeed be extended to multiclass tasks, improving base ICL performance by up to 59%.
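The "PCA renormalization" step the rebuttal refers to (Lines 221-224 of the paper) plausibly looks something like the following: project embeddings onto their top principal components and whiten them so they resemble the isotropic Gaussian inputs the reasoning module was trained on. This is my illustrative reconstruction under that assumption, not the authors' code:

```python
import numpy as np

def pca_renormalize(embeddings, d_out):
    """Center the embeddings, project onto the top d_out principal
    components, and rescale each component to unit variance. The intent
    (as I read the rebuttal) is to map NL embeddings into the same space
    as the synthetic Gaussian training data."""
    x = embeddings - embeddings.mean(axis=0)
    u, s, vt = np.linalg.svd(x, full_matrices=False)
    proj = x @ vt[:d_out].T                  # top principal components
    return proj / (proj.std(axis=0) + 1e-8)  # unit variance per component
```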
Rebuttal 1: Rebuttal: We thank all the reviewers for their thoughtful comments and feedback. We are glad that they found TART to be **novel** and **practical** (VGrP, CKNq, tMpn), **well-motivated** (CKNq, tMpn), and supported by **extensive experiments** (VGrP, tMpn, CKNq). Recall that our paper shows that in-context learning (ICL) is much more competitive with finetuning than previously thought. Further, we show that this is achievable by training an independent Transformer module entirely on synthetic logistic regression data. Based on the feedback, we clarify two points on our presentation quality and scope of TART. We will make the following small, yet important, changes to our paper: - **Presentation.** Our work presents two approaches to improving the ICL performance: (1) finetuning the entire model with synthetic binary classification data and (2) training an independent module (TART) on synthetic data. Following VGrP and hArx’s comments on presentation quality, we realized we had underemphasized point (1), which is a natural first step towards TART. Additionally, this point actually demonstrates that pretrained models do not learn how to classify in-context from standard pretraining objectives. The experiments for (1) were mentioned briefly in the main body (deferred to App. A) and we will update the draft with an additional section incorporating this. - **Scope of TART.** As a result of the questions posed by tMpn and CKNq, we realized that our TART module could be generalized in two useful ways: (1) to multiclass classification problems, and (2) in the low-data regime (4-10 examples). We conducted additional experiments (see attached PDF) and show that for multiclass problems, TART outperforms base ICL performance by up to 59%, far exceeding the gains we saw on binary problems. Further, in the low-data regime, it can be up to 31% better than in-context LLM accuracy and outperforms finetuning on 2/3 datasets. 
--- **Detailed response** We synthesize all major reviewer concerns and considerations, and show how we incorporate these in our presentation either through minor rewrites or with additional experiments. Our paper is about understanding why the ICL ability of pretrained LLMs has sub-par performance when compared with SGD-based finetuning. ICL has fundamentally changed how people use AI systems by providing a new natural language interface for specifying tasks. Yet, it still lags finetuning methods in performance quality (up to 30% on binary classification). Our objective is to understand whether this performance gap is an inherent limitation of ICL or not. - hArx suggests linear probing as an alternative. While we agree that it might be competitive with finetuning in several cases, it doesn’t address the main question that we consider: why is ICL under-performing finetuning? Additionally, linear probing takes away ICL’s intuitive interface from a user’s interaction with an LLM. We begin by asking why this quality gap exists: do the data representations learned by the models lack task-specific information? Or is there some deficiency in the model’s ability to use this information, i.e., reasoning ability? For classification tasks, our results suggest that pretrained LLMs indeed have the necessary information in the representations, but fail because of their inability to do simple forms of reasoning, e.g. linear classification. - We agree with CKNq that understanding this question is a challenging study to undertake, with little agreement in the literature. Our representation-reasoning decomposition is a first step towards formalizing this question (Eq. 1 provides a mathematical description). While Sec. 2.3 currently focuses on binary tasks, we will perform a similar analysis for multiclass tasks. As a first step towards improving ICL, we finetuned an LLM with synthetic binary classification problems – observe that we are updating the weights of the model here. 
While this procedure improves model performance on classification tasks by up to 17% (see App. A), such finetuned models might lose their generative capabilities. This brings about the question: is it possible to improve the ICL abilities of these models without interfering with their other pre-existing capabilities? - To partially address VGrP and hArx’s concern on presentation, we will add a section on this synthetic finetuning between Sec. 3 & 4. We undertook this challenging task of teaching a model a new generic skill without affecting existing capabilities. Rather than directly finetuning, we trained an independent Transformer module (TART) entirely on synthetic data to learn this classification skill. By applying this module to the output embeddings of an arbitrary pretrained LLM, we improved the model’s ICL ability to be competitive with finetuning (within 3%, from a gap of over 20%). Additionally, this surprising finding demonstrates that pretrained models do not learn how to classify (or reason over embeddings) in the pretraining phase. While our work focuses on binary classification, we recognized (following tMpn and CKNq’s suggestion) that our procedure is not restricted to binary tasks. We trained a multiclass version of the TART module and our preliminary results suggest that it can outperform base ICL by up to 59%, more than the improvements on binary tasks. On CKNq’s suggestion, we also explored TART's performance in the low-data regime (4-10 examples). Excitingly, we found that it can be up to 31% better than the base in-context LLM and that it even outperforms finetuning on 2/3 datasets in that data regime (see attached PDF). - We will add detailed experiments for multiclass tasks and extend our current evaluations to the low-data regime as suggested by tMpn and CKNq. Further, we go beyond text and show that the same TART module can be combined with vision and speech models to enable ICL for these modalities. 
This suggests that this capability is somehow uniformly lacking across a range of foundation models. --- Pdf: /pdf/adcdb346198466f9b2a5965a035760a25e5edd9e.pdf
NeurIPS_2023_submissions_huggingface
2023
Structured Semidefinite Programming for Recovering Structured Preconditioners
Accept (poster)
Summary: This paper develops a general preconditioning framework called matrix-dictionary recovery. This framework follows the matrix multiplicative weights update method and is applied to solve two classes of problems: (1) Two diagonal preconditioning problems: outer scaling and inner scaling. This paper gives the first nontrivial approximation algorithms in nearly linear time, going beyond generic SDP solvers. (2) Graph-structured matrix families. This paper proposes algorithms for a perturbed Laplacian solver and for the recovery of two structured matrix classes: M-matrices and Laplacian matrices. Strengths: (1) The proposed algorithm for computing diagonal preconditioners (Theorem 1 and Theorem 2) improves the method using general SDP solvers by $\text{poly}(d)$ factors when $\kappa_o^\star (\mathbf{K})$ and $\kappa_i^\star (\mathbf{A})$ are small. (2) The proposed framework for solving graph-structured matrices (Theorems 3, 4, and 5) obtains $\widetilde{O}(n^2)$ running time and improves on the state-of-the-art methods, which have running time $O(n^\omega)$ by virtue of general linear system solvers. (3) This paper is well-written and easy to understand. Weaknesses: (1) The algorithms for diagonal preconditioning improve the existing method by a factor of $\text{poly}(d)$ at the cost of $(\kappa^\star)^{1.5}$. The applicable scenario is when $\kappa^\star$ is small but $\kappa$ is large. (2) The algorithms for solving graph-structured matrices have time complexity $\widetilde{O}(n^2)$, which improves on $O(n^\omega)$. They work for the case when the input is dense, but in practice the input is often sparse. (3) Section 5 is placed at the end of the manuscript. It would be better to show the comparison with existing work in the Introduction so that readers can clearly see the contributions and advantages. 
Technical Quality: 3 good Clarity: 3 good Questions for Authors: The proposed method for graph-structured matrices has running time $\widetilde{O}(n^2)$, which is nearly linear when the input is dense. (1) Can it be further improved when the input is sparse? (2) The existing work has complexity $O(n^\omega)$. Is it still $O(n^\omega)$ when the input is sparse? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: See Weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
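As background for the outer-scaling discussion above: the classical Jacobi heuristic rescales a positive definite $K$ by the inverse square roots of its diagonal, while the paper's contribution is computing a near-optimal diagonal scaling. A small numpy illustration of the Jacobi heuristic (my own example, not the paper's algorithm):

```python
import numpy as np

def jacobi_outer_scaling(K):
    """Classical Jacobi preconditioning: return W K W with
    W = diag(K)^{-1/2}, so the rescaled matrix has unit diagonal.
    This is the simple heuristic whose analysis the paper also
    tightens; the paper's main algorithms instead approximate the
    *optimal* diagonal scaling."""
    w = 1.0 / np.sqrt(np.diag(K))
    return w[:, None] * K * w[None, :]

def condition_number(K):
    """Ratio of extreme eigenvalues of a symmetric PD matrix."""
    ev = np.linalg.eigvalsh(K)  # ascending order
    return ev[-1] / ev[0]
```

On a badly row/column-scaled SPD matrix, this simple rescaling can reduce the condition number dramatically, which is the regime (small $\kappa^\star$, large $\kappa$) the weaknesses section identifies as the interesting one.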
Rebuttal 1: Rebuttal: Thank you for your reviewing efforts. We are glad that you found the paper easy to understand. Regarding weakness (1), we note that our method computes a (constant-factor) optimal diagonal preconditioner, and typically algorithmic or statistical applications of diagonal preconditioning are interesting when the rescaled condition number $\kappa^\star$ is small (and hence the $\kappa^\star$ overhead of our runtime is also small). While we do not achieve the optimal dependence on $\kappa^\star$, we do outline an approach to improve our dependence to $\sqrt{\kappa^\star}$ in Appendix D of the supplement. Regarding weakness (3), we agree with your suggestion and will move various portions of Section 5 to appear shortly after the relevant results (i.e., Theorems 1-7) for improved readability. Regarding your questions concerning sparsity, the $n^2$ dependence in our algorithm is due to the inclusion of all $n^2$ single-edge Laplacians in our matrix dictionary, due to the possibility of there being any of the edges present. If the graph to be recovered has a known sparsity structure (i.e. it is known that all edges belong to a set $S$ with $|S| \ll n^2$), then we can restrict our matrix dictionary and obtain a runtime near-linear in $|S|$, but it is an interesting open direction to obtain a near-linear runtime on sparse graphs without this assumption. We note that in general, the inverse of the Laplacian of a sparse graph is not necessarily sparse. Further, in the case of M-matrices, the inverse is necessarily dense (Lemma 33, supplement). Finally, we discuss your question regarding existing work for sparse inputs. Current algorithms with runtimes listed as $\ge n^\omega$ are at least bottlenecked by solving sparse linear systems. Such algorithms with improved running times do exist in the recent literature (see e.g. 
“Solving Sparse Linear Systems Faster than Matrix Multiplication” and “Matrix anti-concentration inequalities with applications”), but these assume very sparse matrices, and yield runtimes which remain superquadratic (e.g., roughly $n^{2.27}$ time for $O(n)$-sparse matrices). --- Rebuttal Comment 1.1: Comment: Thank you for your responses! I will retain my score.
Summary: This paper studies preconditioning, which is one of the most important techniques in numerical linear algebra, with numerous applications in optimization and machine learning. It proposes a general framework based on the matrix-dictionary recovery problem, where we are given a matrix $M$ and matrices $M_1,\dots, M_n$, and the goal is to find $w$ such that $\sum_{i\in [n]}w_i M_i$ approximates the optimal preconditioner for $M$. This paper develops a general-purpose solver for this problem. Then, they apply this framework to propose nearly linear-time algorithms for diagonal preconditioning (including outer scaling and inner scaling), semi-random regression, and structured linear systems (including dense matrices approximated by graph Laplacians, dense inverse Laplacians, and dense inverse M-matrices). Technically, their meta-solver for the matrix-dictionary recovery problem is based on the matrix multiplicative weights update method and employs packing SDP solvers to form the gain matrices. They also use other techniques, such as JL sketching, the homotopy method, etc., to implement their solvers. Strengths: The research question considered in this paper is crucial for many applications in both theory and practice. Their results show that a number of numerical linear algebra problems can be solved in nearly linear time, which was not known before. More specifically, for the diagonal preconditioning problem, prior to this work, we did not know how to $O(1)$-approximate the optimal condition number of the rescaled matrix without using a generic SDP solver, which takes $\Omega(d^{3.5})$ time. They improve it to a linear dependence on the number of non-zero entries of the matrix ($\leq d^2$). In addition, they also provide a tighter analysis of the Jacobi preconditioning method. For numerically solving linear systems, when the matrix is a perturbed Laplacian, prior to this work, the only approach was to apply generic matrix multiplication, which runs in $d^{\omega}$ time. 
Their new algorithm runs in about $d^2$-time when the matrix is well-approximated by some unknown Laplacian. Moreover, their algorithm also generalizes to larger families of dense matrices, which are very useful in many different applications. The framework of matrix dictionary recovery is also very interesting and useful. There are some previous results concerning the isotropic case with some restrictions on the condition number. In this paper, their solver works not only for the non-isotropic case but also for general condition numbers. This framework and the general-purpose solver will be very helpful in future studies of numerical linear algebra algorithms. Weaknesses: The running times of their preconditioning algorithms depend on $\kappa^{1.5}$, which may not be optimal. Moreover, the accuracy dependence is $poly(1/\epsilon)$. Also, the paper does not discuss the practical implications of their algorithms, e.g., is it possible to implement them in practice? Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: In Algorithm 1, it is required that $\lambda_{\max}(M_i)\in [1,2]$. Is it just for convenience? What is the runtime bottleneck of Algorithm 1? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 4 excellent Limitations: The limitations are addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
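For intuition on the outer-scaling problem discussed in this review, here is a minimal numerical sketch of Jacobi-style diagonal preconditioning (an illustration only, not the paper's algorithm; the example matrices are hypothetical):

```python
import numpy as np

def cond(M):
    """Condition number lambda_max / lambda_min of a symmetric PD matrix."""
    w = np.linalg.eigvalsh(M)  # eigenvalues in ascending order
    return w[-1] / w[0]

# A well-conditioned kernel hidden behind a bad diagonal scaling.
K = np.array([[2.0, 1.0],
              [1.0, 2.0]])      # cond(K) = 3
D = np.diag([100.0, 0.01])
A = D @ K @ D                   # badly conditioned input matrix

# Jacobi (outer) scaling: W = diag(A)^(-1/2), precondition as W A W.
W = np.diag(1.0 / np.sqrt(np.diag(A)))
print(cond(A))          # very large
print(cond(W @ A @ W))  # recovers cond(K) = 3 in this example
```

Jacobi scaling is the classical baseline (the paper's contribution, per the review, is an approximately optimal scaling with provable guarantees); here it happens to recover the optimal condition number exactly because $A$ is a diagonal rescaling of $K$.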
Rebuttal 1: Rebuttal: Thank you for your encouraging review; we are similarly optimistic about the utility of the tools we provide for future numerical linear algebra problems. Regarding Algorithm 1, indeed the eigenvalue assumption was just for simplicity in stating our error bounds. Each $\mathbf{M}_i$ can always be rescaled to have maximum eigenvalue in $[1, 2]$. This can be accomplished via the power method (see Fact 3 in the supplement), whose runtime cost is dominated by other components of the algorithm. In particular, as you asked, the runtime bottleneck of Algorithm 1 (in terms of polylogarithmic factors and factors of $\epsilon^{-1}$) is the cost of calling approximate packing SDP oracles from prior work on Line 7, though this is an active area of research and any improvements therein would also be reflected in our algorithm’s runtime. We also wanted to mention that while we do not achieve the optimal dependence on $\kappa^\star$, we do outline an approach to improve our dependence to $\sqrt{\kappa^\star}$ in Appendix D of the supplement.
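The rescaling step mentioned in this rebuttal (normalizing each dictionary matrix so its top eigenvalue lies in $[1, 2]$ via the power method) can be sketched as follows. This is an illustrative reimplementation under stated assumptions (symmetric PSD input, distinct top eigenvalue), not the authors' code:

```python
import numpy as np

def top_eigenvalue(M, iters=100, seed=0):
    """Estimate lambda_max(M) for a symmetric PSD matrix M via power iteration."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(M.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        v = M @ v
        v /= np.linalg.norm(v)
    return float(v @ M @ v)  # Rayleigh quotient at the converged vector

def rescale_to_unit_range(M):
    """Rescale M so its top eigenvalue lands in [1, 2] (up to power-method error)."""
    lam = top_eigenvalue(M)
    return (1.5 / lam) * M  # place the estimated top eigenvalue at 1.5

A = np.diag([4.0, 1.0, 0.5])
B = rescale_to_unit_range(A)  # top eigenvalue now ~1.5, inside [1, 2]
```

Targeting the middle of the interval leaves slack on both sides for the multiplicative error of the power-method estimate.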
Summary: In this paper, a framework is presented to compute approximately optimal preconditioners in order to solve linear systems. In the case of diagonal preconditioning, an algorithm is provided whose runtime is (up to log factors) polynomial in the desired accuracy, the optimal condition number of the rescaled matrix (thus in the output conditioning), and the number of nonzero entries of the input matrix. This is explicitly stated in Theorem 1 and Theorem 2 for outer and inner scaling, respectively. In the case of structured linear systems, an algorithm is provided whose runtime is (up to log factors) quadratic in the size of the input matrix. The underlying algorithms are based on the so-called "matrix-dictionary" approximation semidefinite programs. They rely on these algorithms to solve a related problem called "matrix-dictionary recovery". Strengths: If correct, the resulting algorithms improve on the current runtime of the best solvers given by SDP (by a factor of $d\sqrt{d}$). The comparison with existing literature seems well-explained. Weaknesses: Not being an expert in this field and with a very limited amount of time (too short to read the 63 pages of supplementary material), it is completely impossible to judge whether the framework is correct or not. The authors just state the theorems and provide neither proof sketches nor hints that could possibly convince the reader. I believe that this work, possibly sound and surely interesting, is not a good fit at all, considering the currently requested NeurIPS format. Technical Quality: 2 fair Clarity: 1 poor Questions for Authors: l43: What is an M-matrix? The definition should be recalled. Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
Soundness: 2 fair Presentation: 1 poor Contribution: 2 fair Limitations: There is no limitation section provided by the authors in the main submission. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are glad you found our comparison to the literature well-explained, and that our results were interesting. We agree with your comment on M-matrices, which will be addressed in a revision. Like many theoretical papers that appear at NeurIPS, due to space constraints, the technical details of our proofs were moved to the supplemental material. We provided proof sketches for Theorems 6 and 7, the workhorse results for all of our preconditioning applications, on Page 7 (a full analysis of Algorithm 1 is in Pages 11-14 in the supplement). The remaining components of our framework (approximate matrix-vector access and recursive preconditioning), though technically tedious, are fairly routine in the literature. We believe our overall framework provides value to the community and should be verifiable to experts, and hence NeurIPS is an appropriate venue for publication. We hope our discussion elevates your view of our paper. --- Rebuttal Comment 1.1: Comment: Thank you for your responses, I will retain my score.
Summary: This is a theoretical paper that studies the problem of diagonal matrix preconditioning, where given a PSD matrix $A$, the goal is to find a (positive) diagonal scaling $W$, such that $WAW$ has a small condition number, given the promise that such scaling exists. This problem can be solved using SDP, but that is too expensive. Following previous work that uses spectral sparsification techniques, the authors give improved guarantees for this problem. In particular, previously it was only known how to compute an approximately optimal scaling when the resulting condition number is $\leq 1.1$, while the authors' result works for any condition number (and has a polynomial condition number dependency in the runtime). The authors use the matrix multiplicative weights algorithm, together with sketching techniques to efficiently implement its iterations, leading to a near-linear time algorithm for bounded condition number. The results can be generalized to other problem classes like graph-structured preconditioning. Strengths: - Preconditioning is a fundamental problem and this is a significant and original contribution to it. It is significantly stronger than previously known results. - The analysis looks correct and technically strong. - The presentation is thorough and the intro is well written. Weaknesses: - There is no empirical evaluation of the proposed algorithm, e.g. compared to Jacobi iteration, and so the practical impact of this approach is not clear. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - Do the results extend to the case when we have both outer and inner preconditioning? - Since the algorithm is based on multiplicative weights, it seems natural to try and generalize this to the online setting. Are there any significant difficulties there? - Are there potential implications for preconditioned optimizers such as Adagrad, Shampoo, etc? Confidence: 3: You are fairly confident in your assessment. 
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: No Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your kind review of our paper and your insightful questions. We are currently not aware of a variant of our algorithm (and matrix dictionary recovery framework) which extends to simultaneous inner and outer scaling, though it is worth noting that prior work [QGH+22] does obtain such a result via semidefinite programming. Obtaining such a variant is indeed an interesting open problem for future work. In practice, one could potentially alternate inner and outer scaling algorithms until they cease to make progress (as the algorithm gives a constant-factor approximation to $\kappa^\star$ as a certificate), but it is unclear to us how to prove the convergence rate of such a procedure. Regarding online variants, this is another intriguing extension of our algorithm (we assume the reviewer means that e.g., the matrix dictionary arrives online and we maintain a set of weights). This extension does not appear compatible with the way multiplicative weights is currently used in our algorithm, where it is called as a regret minimization procedure to reweight an estimate in response to “gain matrices” produced through a packing oracle, roughly measuring currently violated constraints. Our reduction requires knowledge of the set of matrices used in the dictionary in advance. Nevertheless, finding some type of online variant of our results is indeed an interesting open problem. Regarding Adagrad, etc.: diagonal approximations to the full gradient second-moment matrix are discussed in the original Adagrad paper as a method to improve runtimes at a hopefully-low overhead on the regret guarantees. The quality of the regret bound for the diagonal variant of Adagrad degrades from that obtained for the full matrix variant.
However, the degradation factor (a ratio between traces of matrix square roots) is not directly comparable to our notion of approximation (a relative condition number), so our method does not have immediate implications for Adagrad (or related preconditioning techniques, e.g., Shampoo). We find the questions of 1) obtaining a diagonal preconditioning algorithm directly targeting the degradation notion in Adagrad, and 2) whether our optimal diagonal preconditioners obtain improvements in practice for adaptive gradient methods, to be interesting directions for future exploration. Thank you again for raising these multiple interesting directions for future work. We may add some discussion of these directions in the final version. We hope that the appeal of these questions highlights the utility of this submission in facilitating future research. --- Rebuttal Comment 1.1: Comment: Thank you very much for the detailed response.
Rebuttal 1: Rebuttal: Reviewers HjkM and pK6D asked about practical implementations of our algorithm. We agree that experiments are an important next step towards bringing the results of our paper to practice. Our primary motivation was theoretical: existing guarantees for the problems we study are off from linear time by (sometimes large) polynomial factors in the problem dimension. We close these gaps through our main results, which pave the way for more practical implementations. Our framework builds upon several tools in lines of active research with existing implementations, such as packing SDP solvers and sketching techniques, but which need to be combined in a careful way in order to obtain fast implementations of our overall algorithm; we leave this endeavor to interesting future work. We also note that Appendix C of the supplement provides a new constant-factor optimal analysis of the Jacobi preconditioner, which is already widely-used in practice for outer scaling.
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Beyond Confidence: Reliable Models Should Also Consider Atypicality
Accept (poster)
Summary: This paper focuses on the question of uncertainty quantification in classification by introducing atypicality during recalibration. This paper exhibits a highly explicit motivation, yet it suffers from certain deficiencies and errors in its definitions, and I harbor reservations regarding the soundness of the approach posited in Sec. 4.2 Atypicality-Aware Recalibration. Strengths: This paper evinces a highly explicit motivation and lucidly illustrates the concept of atypicality posited therein through the use of schematic diagrams. The paper presents copious experimental results and supplements them with additional experiments in the appendix, which serves to facilitate the reader's comprehension of the experimental outcomes. Weaknesses: The presence of errors and deficiencies in definitions within the article renders this work somewhat unreliable. Although the article contains numerous derivations, the critical section, namely Sec. 4.2, lacks theoretical analysis. The author fails to analyze the validity and necessity of Eq. 2, and whether there exist alternative methods to the approach posited therein, or whether it is indeed necessary. These issues require requisite analysis. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Q1: In Fig. 1, it is unclear whether the example presented therein is a genuine instance from the experiment. If it is, then it is puzzling why specific values for atypicality and confidence are not provided, and instead, actual and predicted labels are employed. Q2: There are errors in the definition of accuracy$(B_m)$; what's worse, I can't find the formal definition of $B_m$. Q3: RMSE is used as a metric in the paper, but I can't find its definition. To my understanding, RMSE is a metric for regression rather than calibration. Q4: The necessity of Eq. 2, and whether there exist alternative methods to the approach posited therein, or whether it is indeed necessary. I could understand Sec.
4.1, and the natural solution is group-wise TS calibration, so more detailed analysis is required. Q5: Atypicality for LLMs. The definition of $a(x)$ in line 124 is a distribution; it seems different from Definition 2.1. Q6: How is the expected calibration error (ECE) computed on large-scale models? Is it obtained by taking the softmax of all possible values along the final dimension, or by normalizing the top-p values as confidence scores? Note: Using "Line xx" to describe the Eqs in the text is cumbersome and detracts from the readability of the communication. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer 7E3C, Thank you very much for your detailed review and your kindness! We appreciate the time you took to review, and we are really excited that you find our motivation clear and supported with copious experimental evidence! Below are the responses to your questions: ### Response to Questions / Comments - Q1. The instances are indeed from the ImageNet-R dataset. We sampled those points according to the atypicality and confidence values and their quantiles. In the revision, we will add the quantile values to Figure 1. Thank you for this suggestion. - Q2. Thank you for pointing this out; we further clarified the meaning of $B_{m}$ in Appendix D.1. Just noting here too: $B_{m}$ here denotes the set of the data indices where the confidence of the prediction for the sample falls into the interval $(\frac{m-1}{M}, \frac{m}{M}]$, where $M$ is the number of bins. For instance, if we have 10 bins, $B_{1}$ contains samples with confidence in $(0, 0.1]$. - Q3. The definition is in Appendix D.1, Equation 8. RMSCE is simply the root-mean-squared version of the ECE metric. This metric is based on earlier works, e.g., the cited paper [HMD18]. - Q4. Thank you for raising this point. Group-conditional calibration is possible when there is a natural definition of a group (as in the Fairness Experiments presented in Figure 4). However, when we do not have the definition of a group, it is unclear how many groups to form, according to what threshold, etc. In practice, we would face difficulties in picking the number of groups, finding thresholds to form groups, and justifying this choice. Supported by the parametric relation we observe in Appendix Figure 10 and therein, we follow the proposed methodology to overcome such practical hurdles. Indeed, Figure 10 suggests that there is a monotonic relationship between atypicality and temperature that could be captured by the simple parametric form that we have that requires only 3 parameters. - Q5.
Thank you for raising this question, we believe it is important to clarify this. Intuitively, our atypicality measure aims to quantify how well-represented an input or a class is in the training data. $a(X)$ is a notion to evaluate whether an input is well-represented. The key idea is that a larger atypicality value indicates $X$ is not well-represented in the patterns seen during training. $a(X)$ could be implemented in a class-conditional way by looking at $P(X|Y)$ for each class $Y$, or in a marginal way by looking at the overall $P(X)$, and we adopt the one that is more practical to use. For LLMs, it is unclear how one would quantify $P(X|Y)$, however, the model itself is an estimator of $P(X)$, making this notion of typicality readily available. This quantity still informs us about whether the input prompt is well-represented w.r.t. the training data. We will also add further discussion on this to the Appendix for further clarification. - Q6: We use the post-softmax probability for the most probable class as the confidence, and compute ECE using this quantity. This is indicated in Equation 7, but we will also add further clarification on this to the Appendix. Thank you so much for all your questions and comments, we really appreciate your time and kindness! Please let us know if there are any further questions that you may have. --- Rebuttal Comment 1.1: Title: We would love to hear from you! Comment: Dear Reviewer 7E3C, Once again, we really appreciate your detailed review. As we are in the middle stage of the discussion period, we hope you find our responses useful, and we would like to ask if the questions you raised have been addressed. We would love to engage with you further if there are any remaining points. We understand that the discussion period is short, and we sincerely appreciate your time and help!
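The binning scheme described in the Q2/Q3 responses above (confidence bins $B_m$ covering $(\frac{m-1}{M}, \frac{m}{M}]$) underlies the standard ECE metric. A minimal sketch, assuming top-class confidences and per-sample correctness indicators are given (illustrative only, not the paper's code):

```python
import numpy as np

def ece(confidences, correct, n_bins=10):
    """Expected calibration error with equal-width confidence bins.

    Bin B_m holds samples whose top-class confidence falls in
    ((m-1)/M, m/M]; ECE is the bin-size-weighted gap between
    mean accuracy and mean confidence per bin.
    """
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    n = len(confidences)
    total = 0.0
    for m in range(1, n_bins + 1):
        lo, hi = (m - 1) / n_bins, m / n_bins
        mask = (confidences > lo) & (confidences <= hi)
        if mask.sum() == 0:
            continue  # empty bins contribute nothing
        gap = abs(correct[mask].mean() - confidences[mask].mean())
        total += (mask.sum() / n) * gap
    return total

# Perfectly calibrated toy data: confidence 0.75, accuracy 3/4.
print(ece([0.75, 0.75, 0.75, 0.75], [1, 1, 1, 0]))  # 0.0
```

RMSCE, mentioned in the Q3 response, would replace the weighted absolute gaps with a weighted root-mean-square of the gaps.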
Summary: This paper addresses the problem of atypicality in data, and how it impacts performance and confidence. Strengths: Good perspective on the need to consider atypicality in data. Good presentation. Good review of the links between atypicality in data and performance with confidence. Weaknesses: N/A Technical Quality: 2 fair Clarity: 3 good Questions for Authors: N/a Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: N/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer vUKB, Thank you for your kind remark. Let us know if you have any questions or comments.
Summary: This paper proposes a series of "atypicality" measures to be used in complement to the more popular uncertainty metrics. The authors defined input and class atypicality for both classical classification tasks as well as NLG. Such atypicality measures are then combined with temperature scaling to improve calibration quality, as well as prediction sets (RAPS). Strengths: 1. The problem is definitely important, and this paper touches a lot of areas where the particular measures of "atypicality" could help. 2. Linking atypicality to conformal prediction is good. (I'd suggest using "prediction set" as opposed to "uncertainty set" though). This seems to lead to good results in Figure 5 as well. 3. AAR seems to improve ECE over the baselines significantly. 4. I actually think this paper has a lot of interesting ideas, such as Theorem 3.1 (but somehow constrained by the space and many are not fully developed.) Weaknesses: 1. I really think it is inappropriate to make the dichotomy of "Atypicality" vs "Confidence/Uncertainty". In fact, I would argue the various forms of atypicality in this paper are just a bunch of confidence measures. This paper actually also proposes several different variants of "atypicality", and in the uncertainty space, for example, monte-carlo dropout variance and entropy are both used to estimate uncertainty. Thus, I'm not sure what makes "atypicality" a separate concept. 2. In my opinion, this paper is proposing another calibration method, but puts everything in very different languages. It is also missing a lot of calibration baselines. 
In fact, recent works rarely focus on confidence calibration anymore, but more on full calibration [[1]](https://proceedings.neurips.cc/paper/2019/hash/8ca01ea920679a0fe3728441494041b9-Abstract.html), [[2]](http://proceedings.mlr.press/v89/vaicenavicius19a/vaicenavicius19a.pdf), [[3]](https://proceedings.neurips.cc/paper/2019/file/1c336b8080f82bcc2cd2499b4c57261d-Paper.pdf), [[4]](https://openreview.net/forum?id=p_jIy5QFB7). In fact, [4] also used density estimates to calibrate classifiers much like L115-L117 and Definition 2.1. I also think the "Recalibration" section in Section 6 needs a re-write, as it's uncommon to put conformal prediction in this context: Yes, conformal prediction could be considered as calibrating the distribution/p-value, but when we use ECE to measure model calibration, conformal prediction is just out of context. 3. Related to 2, I think the improvement of RAPS/APS would benefit mostly from full calibration as opposed to confidence calibration. The lack of such an experiment is a limitation. 4. While the improvement of LLM performance is good, I don't see the connection between the main point of the paper and such experiments. See Questions as well. 5. Presentation is not clear, probably due to the attempt to jam in too much content. For example, L122-126 should be expanded as it is a very different thing in my opinion. Technical Quality: 3 good Clarity: 1 poor Questions for Authors: 1. Related to W4, why is atypicality defined so differently for NLP tasks? The intro keeps suggesting that the input is atypical wrt a class (e.g. Figure 1). In Definition 2.1, for example, it is also conditioned on $Y=y$. However, L124 uses a completely different definition. In fact, even L125 ($a_Y(y)$) has a completely different meaning than Definition 2.2. 2. Intuitively, what's the rationale of assigning temperatures based on atypicality?
It seems like typical samples could share a temperature, but for atypical ones, it's not like atypical samples themselves are similar. 3. Are the theoretical guarantees still maintained after the modifications to RAPS? 4. Is Theorem 3.1 somewhat mechanical? That is, is this just due to large variance and the fact that $u-\mathbb{P}(Y=1|\hat{\mathbb{P}}_1(X) = u)$ tends to mean reverse? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 1 poor Contribution: 2 fair Limitations: Mostly discussed in weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
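For context on Question 2 above: plain temperature scaling divides the logits by a single temperature $T$ before the softmax, and the atypicality-aware variant under review makes the temperature depend on atypicality. A minimal sketch of the standard version only (not the paper's AAR method; logits are hypothetical):

```python
import numpy as np

def softmax(z):
    z = z - z.max()  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def scaled_confidence(logits, T):
    """Top-class confidence after temperature scaling: softmax(z / T).max()."""
    return float(softmax(np.asarray(logits, dtype=float) / T).max())

logits = [3.0, 1.0, 0.0]
print(scaled_confidence(logits, 1.0))  # sharp, ~0.84
print(scaled_confidence(logits, 2.0))  # softened, ~0.63
```

A temperature $T > 1$ flattens the predicted distribution without changing the argmax, which is why assigning larger temperatures to atypical (overconfident) samples reduces their confidence while leaving predictions intact.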
Rebuttal 1: Rebuttal: Dear Reviewer AQHi, Thank you very much for your detailed review. We really appreciate that you find the problem important and could be helpful in practice; we share the same ideas! We are also very happy to hear that you find a lot of interesting ideas in the paper, thank you very much for your kindness! Below are our responses to your questions and comments: ### Responses to Questions / Comments - Atypicality vs Confidence: Confidence and atypicality are correlated quantities that are clearly not independent, so we agree that there is not a dichotomy per se. Our main message is to highlight if and when having the specific notions that we use could provide utility. As an illustration, let us think about the Bayes Rule: $P(Y|X) = \frac{P(X|Y)P(Y)}{\sum_{Y’ \in \mathcal{Y}} P(X | Y’)P(Y’)}$. For example, let us have two inputs $X_{1}, X_{2}$ where $X_{1}$ is an out-of-distribution example that is not from any of the classes in the problem, and we have $\forall Y’ \, P(X_{1} | Y’) < \epsilon$ for some $\epsilon$. Similarly, $\exists \tilde{Y}: P(X_{2} | \tilde{Y}) \gg \epsilon$ and this input is ‘in-distribution’. Looking at the Bayes Rule, we can have the same confidence distribution for the two points $(P(Y|X_{1}) = P(Y|X_{2}))$ as long as the likelihood ratios are the same, where $X_{1}$ is an OOD point, whereas $X_{2}$ is potentially from one of the classes. In other words, an atypical input $X_{1}$ that is not likely to be from any class can have the same confidence value as the typical input. Therefore, quantifying and accounting for both confidence and atypicality can provide value and in our study, we demonstrate the value in improving calibration and accuracy. - Comments on writing: We changed the term uncertainty set with prediction set in the paper. Similarly, we slightly modified the recalibration part of the related works to make the distinction for conformal and recalibration. We appreciate both of these suggestions. - Q1. 
Thank you for raising this question, we believe it is important to clarify this. Intuitively, our atypicality measure aims to quantify how well-represented an input or a class is in the training data. $a(X)$ is a notion to evaluate whether an input is well-represented. The key idea is that a larger atypicality value indicates $X$ is not well-represented in the patterns seen during training. $a(X)$ could be implemented in a class-conditional way by looking at $P(X|Y)$ for each class $Y$, or in a marginal way by looking at the overall $P(X)$, and we adopt the one that is more practical to use. For LLMs, it is unclear how one would quantify $P(X|Y)$, however, the model itself is an estimator of $P(X)$, making this notion of typicality readily available. This quantity still informs us about whether the input prompt is well-represented w.r.t. the training data. We will also add further discussion on this to the Appendix for further clarification. - Q2. The intuition is that the model’s predictions are overconfident for atypical points. A larger temperature leads to the reduction of confidence and thus larger uncertainty, and we indeed observe larger temperatures for atypical samples. Overall, it is a way to refine the model’s confidence to reflect the higher uncertainty. - Q3. We have not theoretically analyzed the setting for prediction sets. Unfortunately, we do not believe such an analysis could be fit in this particular paper given the space constraints, but we will add further clarification on this under the ideas to future work. In particular, utilizing the proof techniques in [1], one can prove that for conformal prediction in the regression setting, an initialization $f_0$ with smaller MSE will have a shorter prediction interval after the post-processing of split conformal. Future work can extend this analysis to the classification setting. - Q4. Thank you for your comment. 
The larger variance of $\hat{\mathbb{P}}_1(X)$ for more atypical $X$ is indeed one cause of our result. While this can generally explain large calibration errors, it cannot specifically explain overconfidence. That is, it does not explain why the sign of $u-\mathbb{P}(Y\mid \hat{\mathbb{P}}_1(X)=u)$ is positive. Our theorem provides a more specific analysis to characterize the overconfidence phenomenon. Thank you so much for all your questions and comments, we really appreciate your time! Please let us know if there are any further questions that you may have. [1] Lei, J., G’Sell, M., Rinaldo, A., Tibshirani, R. J., & Wasserman, L. (2018). Distribution-free predictive inference for regression. Journal of the American Statistical Association, 113(523), 1094-1111. --- Rebuttal Comment 1.1: Title: We would love to hear from you! Comment: Dear Reviewer AQHi, Once again, we really appreciate your detailed review. As we are in the middle stage of the discussion period, we hope you find our responses useful, and we would like to ask if the questions you raised have been addressed. We would love to engage with you further if there are any remaining points. We understand that the discussion period is short, and we sincerely appreciate your time and help! --- Rebuttal Comment 1.2: Title: Thank you Comment: Thank you for the responses. In general, I think the authors' responses conform to my original understanding. I don't think measuring the quantities currently called atypicality is bad, but I was just saying these are conceptually different concepts. For example, if P(X) in NLP is atypicality, then there should be a clear distinction between class-conditional atypicality vs non-class-conditional atypicality. This is currently not the main point of the paper. Q3: I think the point of RAPS is the coverage guarantee.
> an initialization with smaller MSE will have a shorter prediction interval after the post-processing of split conformal I agree, but I'm not sure why we need to mention RAPS if MSE is already the indicator. If there are experiments with respect to W3 then this might be more useful/convincing. Q4: Could you explain why this is *not* related to mean-reversion? My original question was essentially that if u is random, conditioned on $u-\mathbb{P}(Y|\hat{\mathbb{P}}(X)=u)$ being large, it seems like its expectation will be smaller mechanically (because whatever "noise" will mean-revert) --- Reply to Comment 1.2.1: Title: Thank you! Comment: Thank you for your response! - Q3: We believe without the empirical evidence that we presented in the paper, it would not have been clear whether and how atypicality could be useful in conformal prediction (and how it would concretely relate to coverage). While we took a first step to demonstrate the utility of quantifying atypicality for conformal prediction, we agree that there is more empirical and theoretical investigation to do in future work regarding full calibration and beyond, which we will note in our limitations section. - Q4: As one potential example, the results in Bai et al. show that under different activation functions, one can observe an underconfidence effect instead of overconfidence in a similar setting (e.g. see Equation 13 in Bai et al. for an activation function that induces underconfidence). Given such evidence, it is not clear to us whether or not there is more going on than general mean reversion. We are not broadly claiming it is not mean reversion, but argue that this phenomenon is subject to further investigation. Once again, we appreciate that you find the ideas in the paper interesting, and that you took the time to interact during this phase. Please let us know if you have any further questions, we are happy to follow up.
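The Bayes-rule illustration from the rebuttal above (an out-of-distribution input and an in-distribution input receiving the same posterior confidence because their likelihood ratios match, while their marginal typicality differs) can be checked numerically. This sketch is illustrative only; the likelihood values are made up:

```python
import numpy as np

def posterior(likelihoods, prior):
    """P(Y|X) from class-conditional likelihoods P(X|Y) and prior P(Y)."""
    joint = np.asarray(likelihoods) * np.asarray(prior)
    return joint / joint.sum()

prior = np.array([0.5, 0.5])
# Typical input: large likelihood under class 0.
lik_typical = np.array([0.9, 0.1])
# Atypical (near-OOD) input: tiny likelihood under every class,
# but the same likelihood *ratio*, hence the same posterior.
lik_atypical = np.array([9e-6, 1e-6])

print(posterior(lik_typical, prior))       # [0.9, 0.1]
print(posterior(lik_atypical, prior))      # [0.9, 0.1] -- same confidence
print(lik_typical.sum(), lik_atypical.sum())  # marginal P(X) differs by orders of magnitude
```

This is exactly why confidence alone cannot distinguish the two cases, while a marginal-density (atypicality) estimate can.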
Summary: The paper questions the reliance of probabilistic classifiers on the confidence score alone for measuring reliability. This is an important question that has not been asked that rigorously in the machine learning literature. In particular, there are two notions of uncertainty: aleatoric and epistemic. A widespread consensus is to consider the softmax-applied output of a neural network as the confidence of the classifier. However, it is not clear what kind of uncertainty that score captures. Literature on proper scoring rules would suggest that it measures the posterior probability of the input falling into some class, which is an inherently aleatoric notion of uncertainty. However, practitioners and researchers in the community also hope that such a score would be a measure of reliability for rare / out-of-distribution / atypical samples, which is more of an epistemic notion of uncertainty. Therefore, some sort of conflation is happening over the nature of this score. A sample could be low confidence because it is inherently ambiguous (aleatoric) or because it is not well represented in the training distribution (epistemic). The paper opens with this question, and gives arguments for quantifying the *atypicality* of the sample as well for more reliability. With a simple measure to quantify atypicality, it shows that there is a correspondence between how atypical a sample is and how mis-calibrated it is. It then studies atypicality-aware recalibration. There is a sufficient empirical evaluation. Strengths: 1. One of the major strengths of this paper is that it raises a very relevant question. Interpretation of the confidence outputted by the classifier has a tendency to suffer from serious conflation issues, where it is not clear what it captures. The paper argues for a different notion that can settle this issue (or at least start interesting discussions in the community). I believe this is highly valuable. 2. 
While the operationalisation of the proposed atypicality score is simple, the scoring criterion is already good enough to support the arguments made in the paper in a satisfactory way. 3. Thorough empirical evaluation is done on different settings ranging from balanced supervised classification, imbalanced classification, language modelling, recalibration, and conformal prediction uncertainty sets on a range of datasets. Weaknesses: 1. There are no major glaring weaknesses of the paper. Except that maybe it underplays itself, as, in my mind, it puts forth interesting and insightful commentary on the nature of the softmax output (or sigmoid for binary classification) of the classifier. There is a great debate on what that is, see Appendix B.2 in [1]. The paper could be even more impactful if these discussions were taken into consideration. 2. Section 3.2: Although some theoretical results are presented that show a correspondence between atypicality and overconfidence in a simple model, it is not clear how these results apply to the other things in the paper. It would be good to draw other insights from them. Do these results provide any insight (or intuition) on why atypical examples would be mis-calibrated? Is this due to estimation errors? Personally, I thought about this question myself. And come to think of it, it does make sense intuitively why mis-calibration might be an issue for atypical examples. Consider binary classification, and assume that one is perfectly able to model $\eta(x) = \mathbb{P}(Y=1\vert X=x)$ where the evaluation of $\mathbb{P}$ happens with respect to the training data distribution $P$. Now for a sample $x_{new}$ that is not well-represented in $P$, it is not surprising to see that $\mathbb{P}(Y=1 \vert X=x_{new})$ would be low. However, there is no reason to expect that it would be calibrated. Could the authors comment on or augment this intuition? [1] Tim Pearce et al. Understanding Softmax Confidence and Uncertainty. 
(2021) Technical Quality: 3 good Clarity: 3 good Questions for Authors: Some questions follow: 1. (Section 4.2): I'm not sure I follow the intuition behind Equation 2. $\psi$ is a function of input atypicality. What is the nature of $\psi$? Is it monotonic, as in, does it preserve the atypicality ordering of the input? If I assume so, then for a typical input, say the atypicality score is 0; in this case $\hat{\mathbb{P}}_{\text{ARR}}(Y\vert X) = \frac{\exp\{S_Y\}}{Z(X)}$, which is not intuitive. I'm sure this is my misunderstanding, so I would appreciate the clarification. Specifically, the nature of $\psi$ and $\phi$. 2. (Line 222): The paper shows that atypicality-aware recalibration improves the overall accuracy in imbalanced supervised classification. In my mind, this is problematic, no? Recalibration techniques should preserve the nature of the classifier; this is called the sharpness / refinement property of calibration maps. Furthermore, ECE roughly translates to the difference between average confidence and average accuracy. And the high ECE for atypical samples is the result of lower accuracy but higher confidence. One could improve on ECE by compensating for accuracy instead of the confidence, which in my opinion is not the goal of calibration. I would love to hear from the authors on this. 3. (Section 4.3): I understand that atypicality-aware recalibration is useful in fairness methods. Although it is an interesting case study and not the major focus of the paper, could the authors comment on (and compare with) the notion of multicalibration in the literature? I think atypicality-aware recalibration has the same spirit as multicalibration in the fairness literature. 4. (Section 5, Line 266): For conformal prediction, the authors write "group points according to their confidence and atypicality quantiles...", do the authors use some function to club the confidence score and the atypicality score into a single score? Or is some multiple testing procedure used to accommodate both scores? 
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer y4KB, Thank you very much for your detailed review! We are really glad to hear that you find that the notions proposed in the paper can start interesting discussions in the community, and that you find our empirical evaluation to be thorough. We really appreciate your approval and kindness! Below are our responses to your questions: ### Response to Questions / Comments - Q1. When $\psi(a(X))=0$, the output distribution reduces to a fixed distribution over classes that was estimated on the calibration set. This distribution can be seen to induce a prior probability over classes. $\psi$ trades off between this distribution and the model’s confidence. When the point is typical, the output distribution is closer to $\hat{P}(Y|X)$. Empirically, we indeed observe $\phi$ to have a monotonic relationship with atypicality. Note that Theorem 3.1 similarly suggests a monotonic relationship. If a point is atypical (say $\psi \approx 0$), then the linear model is more overconfident, thus we should have a larger correction (e.g., a larger temperature). - Q2. Thank you for raising this important question. Most of the accuracy improvements arise in the imbalanced classification settings. As we observe in our analyses, the model often puts less probability mass on the rare classes and puts more confidence on the typical (common) classes. If we add class-typicality to the recalibration procedure, we can fix this issue, and this can change (and increase) the accuracy. Indeed, in Appendix Figure 11, we observe that more atypical classes get larger positive corrections, which increases the predicted probability. - Q3. Indeed, there are shared notions between multicalibration and our work. While multicalibration can be considered as a general framework to study group fairness, we are interested in a very specific definition of groups, namely atypical and typical groups. 
We believe that atypicality would be a useful notion that could be integrated into any multicalibration algorithm, which hopefully future work can study. - Q4. We split points according to both atypicality and confidence. Namely, a group $G$ here is defined using 4 thresholds, namely $G = \{x: (l_{a}^{(G)} < q_{a}(x) \leq h_{a}^{(G)}) \land (l_{c}^{(G)} < q_{c}(x) \leq h_{c}^{(G)}) \}$ where $q_{a}(x)$ and $q_{c}(x)$ denote the atypicality and confidence quantiles for the sample $x$, $l_{a}^{(G)}$ and $h_{a}^{(G)}$ denote the atypicality lower and upper bounds for group $G$, and $l_{c}^{(G)}$ and $h_{c}^{(G)}$ denote the confidence lower and upper bounds for group $G$. Using a calibration set, these bounds are simply determined by the quantiles of the confidence and atypicality statistics. - On your intuition & Section 3.2: Thank you for providing your intuition! First of all, the insights from the theoretical model indeed transfer empirically to more complex models. As one can note in Figure 1a or Figures 5-6, predictions for more atypical instances are more overconfident and have lower accuracy/coverage (for prediction sets). Further, we mostly agree with your intuition on the estimation errors, but this alone is not sufficient to explain why specifically overconfidence happens (and not, say, underconfidence). For this point, the activation function becomes an important factor, and the Bai et al. results that we build on provide a good discussion. - Pearce et al.: This is indeed a very interesting and relevant reference, thank you for bringing this to our attention! We will be sure to include this reference in our revision. Thank you so much for all your questions and comments, we really appreciate your time! Please let us know if there are any further questions that you may have. --- Rebuttal Comment 1.1: Title: Post-rebuttal comment Comment: Thanks to the authors for the response, and for the nice Bai et al. reference (I missed it before). 
The response clarifies my questions. However, I would suggest the authors elaborate on these points (especially Q1 and Q4) in the main text. I do agree with the other reviewers that, presentation-wise, the paper could be improved. However, I liked the central idea and goal of this paper, and the simplicity with which it goes about achieving it. I'm happy to increase my score as I think this paper raises important points. --- Reply to Comment 1.1.1: Title: Thank you! Comment: Dear Reviewer y4KB, We truly appreciate your approval! We will elaborate on the explanations to Q1-Q4 in the main text. Thank you again for the time you took for interaction during the discussion phase.
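The joint quantile grouping described in the authors' Q4 answer can be sketched numerically. This is a hypothetical minimal version (our own names and binning choices, not the authors' code): each sample gets a group id from the quantile bin of its atypicality statistic crossed with the quantile bin of its confidence statistic.

```python
import numpy as np

def quantile_groups(atyp, conf, n_bins=2):
    """Assign each sample to a group G defined by joint quantile bins of its
    atypicality and confidence statistics. The bin edges play the role of the
    bounds l^(G), h^(G) from the rebuttal, estimated here from the data itself
    (standing in for a separate calibration set)."""
    edges_a = np.quantile(atyp, np.linspace(0, 1, n_bins + 1))
    edges_c = np.quantile(conf, np.linspace(0, 1, n_bins + 1))
    # searchsorted maps each value to its bin index in [0, n_bins - 1].
    ia = np.clip(np.searchsorted(edges_a, atyp, side="right") - 1, 0, n_bins - 1)
    ic = np.clip(np.searchsorted(edges_c, conf, side="right") - 1, 0, n_bins - 1)
    return ia * n_bins + ic  # one group id per (atypicality, confidence) cell

rng = np.random.default_rng(0)
atyp = rng.random(1000)
conf = rng.random(1000)
groups = quantile_groups(atyp, conf, n_bins=2)
assert set(groups.tolist()) == {0, 1, 2, 3}  # 2x2 joint quantile cells
```

Per-group coverage or calibration error can then be computed by averaging within each group id, which is how one would reproduce a "coverage vs. atypicality-confidence group" style analysis.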
null
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Self-Supervised Learning with Lie Symmetries for Partial Differential Equations
Accept (poster)
Summary: The paper presents an approach for self-supervised learning of PDEs. The main approach uses Lie symmetries (similar, e.g., to translational symmetries for images) in the solutions of differential equations not only for data augmentation (as has been done before), but for representation learning for downstream tasks. Strengths: The paper is a pleasure to read. There are nice diagrams (perhaps even too fancy?) and a nice verbal presentation everywhere. The appendix comes with an introduction to the theory, which is also well written and correct. The literature is thoroughly cited. (Of course, there is always more to cite, but the amount is huge and the choices are reasonable.) Many experiments are conducted. Even limitations and drawbacks are clearly stated, such that after reading the paper I really feel what works easily, what works with problems, and what is not yet known. I feel well-informed after reading the paper. Furthermore, the topic is clearly relevant for NeurIPS: it is a non-trivial machine learning question with relevant application domains. In particular, the correct usage of PDE terms is refreshing to see when reviewing NeurIPS papers about PDEs. Here, I could find no problems. All terms were used correctly, even though more could and should have been said about PDEs; but the paper only has 9 pages, and more is said in the appendix. This quality in mathematical writing is not the norm at NeurIPS! And, at the same time, the paper is not overly technical. Weaknesses: I could follow the paper. However, I have a strong math background that includes symmetries of PDEs. The authors try really hard (and are quite good) at making everything clear to the audience. However, I guess that much of the NeurIPS audience might have problems with some techniques used in the paper. After reading the paper, it was not clear to me what kind of data is necessary. In one of your experiments, you used data on a grid and were thus able to apply ResNet18. 
More details in this regard (what is possible? what is reasonable?) would have been nice. Quite a lot of data is necessary for training. If I have 10000 samples of a PDE, I kind of know the PDE anyway. This, together with the high dependency on several hyperparameters and the additional limitation (see below), is the main problem of this paper, which makes it "just a strong NeurIPS paper" instead of an outstanding paper. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: I guess for a paper at this level of abstraction, you do not need to define what a group and a group operation are? (l. 115) Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: The approach is certainly interesting. However, where I am working with differential equations, one only knows the symmetries from the differential equations. Hence, I rarely see a need for this approach. Otherwise, limitations and open questions are given in the paper (e.g. l. 99ff, 130ff, etc.) Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for taking their time to review the paper and providing valuable comments and feedback. We are glad the reviewer is happy with the quality of the experiments and the completeness of our manuscript. We address the reviewer’s questions and comments below. *** ### Question 1: >After reading the paper, it was not clear to me what kind of data is necessary. In one of your experiments, you used data on a grid and were thus able to apply ResNet18. More details in this regard (what is possible? what is reasonable?) would have been nice. Thank you for raising this interesting point. Going further, real-world data could indeed come in the form of irregular grids making the use of architectures such as ResNet more challenging. There, one could leverage neural networks for solving PDEs with graph based methods that handle irregular grids [1-2] and SSL frameworks that have been extended to architectures more suited to such data [3]. Additionally, SSL for computer vision comes with recipes for using transformers, which could also be leveraged for irregular grids although such an approach may require more data than ResNets or GNNs. We will add discussion around these points to the updated draft of our paper. As an aside, Lie symmetries are still implementable on irregular grids, since implementing Lie point symmetries only requires knowing the values of the solution at any given point (hence the name). This is not necessarily true of other types of symmetries such as Lie Bäcklund symmetries which require knowing the solution at a neighborhood around every point to implement them. ### Question 2: >Quite a lot of data is necessary for training. If I have 10000 samples of a PDE, I kind of know the PDE anyway. This, together with the high dependency on several hyperparameters and the additional limitation (see below), is the main problem of this paper which make is "just a strong NeurIPS paper" instead of an outstanding paper. 
Although our current approach assumes we know the family of the PDE, our datasets mix realizations that have different initial conditions and equation parameters such as kinematic viscosity or buoyancy. In that regard, having models that, like those we propose, generalize to unseen initial conditions or equation parameters is already very valuable. We also refer the reviewer to our answer to reviewer swdq, where we provide an experiment learning representations from a dataset mixing Burgers, KdV and KS realizations. Although very preliminary, this setup would remove the need to know the PDE family, while alleviating the data requirement for each equation. We also show in Figure 4 that we already have improved performance with our approach with only a few thousand data points. While this may already be a big requirement in certain scenarios, this indicates that it can still be applied in more data-constrained regimes. ### Other relevant comment: > The approach is certainly interesting. However, where I am working with differential equations, one only knows the symmetries from the differential equations. Thank you for pointing this out. Indeed, knowing the differential equation allows one to derive symmetries. This is the most ideal setting. Nevertheless, symmetries can also be deduced by knowing what class of differential equations a PDE is a part of, or by knowing properties of the PDE. For example, all flow-related equations share common symmetries such as translations and Galilean boosts (e.g., see the added shared experiment in response to reviewer swdq). Furthermore, many symmetries can be derived from known conservation properties by Noether's theorems. So in general, knowing an equation precisely is best, but this is not always required in practice. Thank you again for raising this clarification. We will comment on this further in our updated draft. **References:** [1] Belbute-Peres, Filipe De Avila, Thomas Economon, and Zico Kolter. 
"Combining differentiable PDE solvers and graph neural networks for fluid flow prediction." international conference on machine learning. PMLR, 2020. [2] Brandstetter, Johannes, Daniel Worrall, and Max Welling. "Message passing neural PDE solvers." arXiv preprint arXiv:2202.03376 (2022). [3] You, Yuning et al., “Graph Contrastive Learning with Augmentations”, NeurIPS 2020 --- Rebuttal 2: Title: Opinion after the rebuttals Comment: I have read all reviews and all rebuttals. Having given the most positive rating, I will try to justify my unchanged opinion. - All reviewers agree that the paper is well-written. - All reviewers agree that the method is (mostly) correct. - The main open point regarding correctness is the boundary conditions. In my corner of differential equations, the boundary conditions are more like additional information rather than an integral part of the differential equations. Hence, at least in my corner of differential equations, the raised problem is less problematic. - The primary point of difference is the amount of novelty. Since I feel that differential equations, especially partial differential equations, pose really challenging questions, the level of novelty I see in this paper is sufficient for me to consider it a valuable contribution to NeurIPS. I acknowledge that this point is highly subjective.
Summary: This paper proposes to learn general-purpose representations of PDEs from heterogeneous data by implementing joint embedding methods for self-supervised learning. The learned representations outperform baseline approaches on invariant tasks such as regressing the coefficients of a PDE, and improve the time-stepping performance of neural solvers. Strengths: - Proposes a general framework for self-supervised learning on PDEs by using symmetry transformations of the PDE. - The learned representations can be used in several tasks, such as determining unknown parameters and time-stepping. - The paper is well-written and the presentation is clear. Weaknesses: - The novelty of this paper is quite marginal. The SSL framework adopted in the paper is well-known and not customized for the specific PDE problem. The only contribution is the augmentation of PDE solutions according to symmetry groups, which is also well-studied in previous literature, such as in [1]. - The evaluation on time-stepping is clearly not enough. For example, this paper doesn’t compare with several important baselines, such as MPPDE [2], FNO [3], UNet, PINN, etc. I think the authors need to show the effectiveness of using the learned representations to improve at least a few of these models on limited labeled training data. - The regression task of determining the external buoyancy force (a constant value) in the NS equation is quite simple in practice. In practice, there are more complicated forcing terms. For example, in FNO [3], they use a forcing term containing cosine and sine waves. Can the authors use the forcing term in FNO and regress the parameters in that forcing term? - The viscosity in the NS equation is also an important parameter. Can the authors also provide regression results on determining the viscosity? [1] Brandstetter, Johannes, Max Welling, and Daniel E. Worrall. "Lie point symmetry data augmentation for neural PDE solvers." International Conference on Machine Learning. PMLR, 2022. 
[2] Brandstetter, Johannes, Daniel Worrall, and Max Welling. "Message passing neural PDE solvers." arXiv preprint arXiv:2202.03376 (2022). [3] Li, Zongyi, et al. "Fourier neural operator for parametric partial differential equations." arXiv preprint arXiv:2010.08895 (2020). Technical Quality: 2 fair Clarity: 3 good Questions for Authors: - Why do the authors only consider Lie point symmetries for PDEs? Is there any other symmetry group that applies to PDEs? - If a PDE does not have periodic boundary conditions, is augmentation using the symmetry group still valid? - Can the authors elaborate more on why the learned representation cannot improve time-stepping performance for the Burgers equation? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 1 poor Limitations: The limitations of this paper are discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback and insightful comments. We address the reviewer’s questions below and at times, challenge their criticisms. We welcome further discussion. *** ### Novelty We acknowledge that SSL for computer vision and the study of symmetry groups of PDEs are *separately* well-established. We respectfully disagree on novelty, and share examples that shed light on our contributions: - The role of data augmentation in supervised learning [1] and SSL varies fundamentally. In SSL, augmentations dictate properties preserved by the encoder, and no learning happens without them. As further illustration of this, our conclusions differ from [1] on the utility of augmentations. For instance, time translations often impede learning good representations for Burgers' or Navier-Stokes, contrasting with findings in [1] (Tables 3d and 4d). - The Lie-Trotter techniques for implementing augmentations (Appendix B) are new to the ML for PDE community as far as we are aware, and enable more smoothly and universally applying augmentations. This implementation may even be useful outside of SSL. For comparison Ref. [1] applied Lie symmetries by choosing an arbitrary order for the basis elements, which neither guarantee universality (there exist augmentations which cannot be performed) nor smoothness (the Lie algebra has no guarantees on norm of augmentation). - SSL comes with many moving parts to which subsequent representations may or may not be sensitive [2]. For example, it was unclear a priori that using the generic task of coefficient or initial condition regression as metrics for pre-training evaluation (replacing the ImageNet top-1 accuracy in computer vision) would select embeddings that transfer nicely to other tasks such as time-stepping. 
SSL algorithms based on joint embedding architectures have application in various regimes outside its original use in computer vision, including published works for graphs [3], language [4], point clouds [5], and more. These works elucidate where SSL is useful, how it can be adapted, and the extent to which SSL can be applied for learning real-world data in general. Overall, we believe it is very valuable to share our insights gained for PDE data. ### Improving other models such as FNO We acknowledge that using our representations to improve other baselines would make this work stronger. We refer the reviewer to our general comment, where we show the effectiveness of the SSL representations to condition FNOs, Fourier U-Nets, and U-Nets with a supplementary conditioning method, AdaGN. ### Determining external buoyancy in the NS equation From a certain viewpoint, regressing the buoyancy force (a constant), within the NS equation might appear straightforward. One can simply calculate the derivatives and functions in the PDE and regress it. However, this is not the case here: Crucially, our network is agnostic to the specific form of the PDE. To regress the buoyancy force, it has to disentangle the parameter from the complex, nonlinear map of the PDE. Even in a supervised setting, the performance is far from the buoyancy's resolution, showing that it remains a hard task empirically. This makes our task valid to gauge the quality of our representation. ### Navier-Stokes regression results on viscosity The NS benchmark we use (PDEArena) comes with a fixed viscosity. We did not have sufficient time to generate a new dataset with varying viscosities. However, we will add this task in a revised version of the manuscript. ### On the choice of Lie point symmetry for PDEs We primarily focused on Lie point symmetries due to their systematic derivation and well-established applications in PDEs. 
We acknowledge that other forms of symmetries, such as approximate, Bäcklund, and discrete transformations, can also be considered. However, these are typically challenging to derive, may introduce other sources of errors and, like Lie symmetries, are typically derived in infinite domain settings (so do not address boundary issues). When learning data from multiple different types of PDEs, these may be useful but this would require different implementations and changes to our setup. ### On augmentation using symmetry groups in non-periodic boundary conditions You have touched upon an important limitation that we also point out in our work. Since Lie symmetries do not preserve boundaries, we limit and test the magnitude of transformations to avoid large errors (see discussion beginning on line 142). In short, addressing boundary conditions more directly would either escalate the complexity of the task or divert us from the generality we aimed to maintain. Please refer to the general response and response to reviewer VTPv for a more complete answer. ### On conditioning time-stepping for Burgers equation It is likely that the dynamics of Burgers’ equation are easy to predict with little room for improvement compared to KdV and KS given the same resources. A similar observation is made in [1] (Appendix D, Tables 3 and 4), where the normalized MSE for Burgers is far lower than that for KdV and KS, when provided with a few hundreds of samples. **References:** [1] Brandstetter, Johannes et al. "Lie point symmetry data augmentation for neural PDE solvers." ICML 2022. [2] Balestriero, Randall, et al. "A cookbook of self-supervised learning." arXiv preprint (2023). [3] Xie, Yaochen, et al. "Self-supervised learning of graph neural networks: A unified review." IEEE TPAMI (2022). [4] Chuang, Ching-Yao, et al. "Debiased contrastive learning." NeurIPS 2020. [5] Jiang, Li, et al. "Guided point contrastive learning for semi-supervised point cloud semantic segmentation." CVPR 2021.
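The Lie-Trotter techniques the authors mention for implementing augmentations rest on the Lie-Trotter product formula, which composes small steps of non-commuting generators. The following toy sketch (our own illustration on 2x2 matrices, not the paper's Appendix B implementation) shows the underlying idea: $(e^{A/n} e^{B/n})^n \to e^{A+B}$ as $n$ grows, even when $A$ and $B$ do not commute.

```python
import numpy as np

def expm(M, terms=30):
    """Matrix exponential via a truncated Taylor series (fine for tiny M)."""
    out = np.eye(M.shape[0])
    acc = np.eye(M.shape[0])
    for k in range(1, terms):
        acc = acc @ M / k
        out = out + acc
    return out

def lie_trotter(A, B, n):
    """First-order Lie-Trotter splitting: (exp(A/n) exp(B/n))^n."""
    step = expm(A / n) @ expm(B / n)
    return np.linalg.matrix_power(step, n)

# Two non-commuting generators (a shear and a rotation generator).
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0, -1.0], [1.0, 0.0]])

exact = expm(A + B)
err_coarse = np.abs(lie_trotter(A, B, 2) - exact).max()
err_fine = np.abs(lie_trotter(A, B, 64) - exact).max()
assert err_fine < err_coarse  # smaller steps, better approximation
```

In the augmentation setting, applying generators in small interleaved steps rather than one fixed order is what lets arbitrary group elements be reached smoothly, which is the universality/smoothness point made in the rebuttal.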
Summary: In this paper, the authors propose to use self-supervised learning for obtaining an embedding that can robustly be used for predicting some quantities of interest or for time stepping. Particularly, they use the joint-embedding framework for SSL. They use symmetry groups for training the embedding to be invariant to these symmetry groups and thus capture the underlying behavior of the PDEs. Strengths: + Novelty in using Lie symmetry groups for SSL training. + Improvement in training physics-based models in much less time. + Covered a comprehensive list of symmetry groups for different PDEs. This would be very useful to the community. Weaknesses: - For a person from an engineering background, I do think SSL is quite similar to the ideas of Multi-fidelity modeling (https://arxiv.org/abs/1609.07196, https://arxiv.org/abs/2110.04170, https://arxiv.org/abs/1903.00104). Both help reduce the training data and both are very easy to implement. Unfortunately, SSL for the quit nh - The authors talk about taking steps for building foundation models; I would imagine some kind of amalgamation of all the PDEs while training the SSL, i.e., you use all the data from all the PDEs to build and train an SSL model. That seems closer to how foundation models would behave. I feel like the paper raises big hopes and doesn't deliver on them in the implementation. If they were to not talk about foundation models and just say they build SSL frameworks for PDEs, I would have been very happy. - The authors make a comparison between supervised models and SSL-trained ones. But there is a whole host of PDE-residual-based models like DeepONets, FNOs, etc. (which the authors acknowledge in the supplement). It would be nice to know if SSL is even needed for PDE-based foundation models. Can PDE-residuals sufficiently help us navigate through foundation models? Is there a need for SSL? That's not established fairly. 
(note that I acknowledge the comparisons, but is that sufficient to establish the necessity of SSL for PDEs?) Technical Quality: 3 good Clarity: 3 good Questions for Authors: See the weaknesses. Further, there are more questions that are not particularly weakening the paper but could make it clearer. 1. Figure 5 and the corresponding text are very interesting to me. It seems that the results from the Lie transformation (0.0038 compared to 0.0078 supervised) and Crop (0.0052 compared to 0.0078) are not that different. It seems to me that applying some kind of SSL training is more important than using a particular Lie transformation per se. Is it worth the effort to find out the best possible selection of Lie point augmentations? Further, is this selection task-dependent? How does one train a generic model for several tasks then? 2. It would be nice to see some additional information on the computational overhead from SSL compared to supervised training. (see weaknesses; ideally, I would also like to know the comparison with DeepONets) Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for taking their time to review the paper, praising the novelty of our work, and providing valuable comments and feedback. We address the reviewer’s questions and comments below. *** ### SSL and Multi-fidelity modeling Part of the reviewer’s response was cut off. Could they please clarify what the remainder of this comment was? ### On foundation models for PDEs Indeed, we mention foundation models for PDEs as part of an aspirational research program, but only take the first step towards this goal. We will correct the phrasing accordingly. Moreover, to bring our work closer to this ambitious goal, we added new experiments training a common representation from the mixture of our Burgers, KdV and KS datasets (all models of chaotic flow that share many Lie point symmetries), using crops and experimenting with different sets of Lie augmentations that are common to all three. We evaluate the representations on parameter regression tasks and report averaged results over 3 runs. Our preliminary results are encouraging yet show that mixing data from different equations is not straightforward: | Task | Burgers kin. viscosity (%) | |-----------------------------|--------------| | Supervised | 1.55 ± 0.01 | | SSL features | 1.53 ± 0.02 | On Burgers we match the supervised baseline in a short training regimen (50 total epochs), showing that the model can learn good representations with heterogeneous data sources. However, in the short time frame of the rebuttal, we could not get conclusive results when evaluating on KS or KdV. Mixing the equations requires addressing new challenges (e.g. dealing with different chaotic behavior and different time and length scales between PDEs). It is a very interesting direction for future work. ### Is SSL needed for foundation models? What about PDE-residuals? 
The reviewer raises some good points about the necessity of SSL, and if we understand correctly, their inquiries cover two related issues: (1) the comparative effectiveness of SSL in constructing foundation models, and (2) the requirement of SSL in light of PDE-residual-based methods, which can operate in semi-supervised or unsupervised environments. Addressing (1), to clarify: we are not asserting that SSL is categorically the superior approach for building foundation models or representation learning methods. Nevertheless, based on very strong results in the realm of image processing [5], we are confident in SSL's potential when applied to PDEs. For (2), we concur that many potentially effective PDE-residual-based methods exist. However, these methods seem to serve a different purpose, i.e. learning an approximation to a differential operator, while SSL attempts to learn a rich yet easy-to-leverage feature space. Our added experiments with FNO (see summary response) show the latter can benefit the former. Could the reviewer elaborate on the comparison they expect? ### Selection of augmentations Using crops as the only augmentation provides a decent baseline. However, useful Lie point augmentations are required for best performance. We have evidence of this for both regression/classification tasks and time-stepping. We report two new experiments in this regard (see Table below): For Burgers’, we trained a representation with the crop augmentation only. It does not outperform the supervised baseline in kinematic viscosity regression, as opposed to the representation trained with crop and Lie point augmentations. For Navier-Stokes, the representations with MSE of 0.0052 (crop only) and 0.0038 (crop + Lie augmentations) reported in Fig. 5 for buoyancy regression are actually different. 
Our best time-stepping model (UNet + AdaGN conditioning) conditioned on the crop-only representation does not outperform the baseline, in contrast to conditioning on the representation trained with crop and Lie point augmentations. | Task | Burgers kin. viscosity (%) | Navier-Stokes time-stepping MSE (1e-3) | |-----------------------------|--------------|-------------| | Supervised baseline | 1.18 ± 0.07 | 2.37 ± 0.01 | | SSL features (Crop only) | 2.3 ± 0.2 | 2.9 ± 0.8 | | SSL features (Crop + Lie augmentations) | 0.97 ± 0.04 | 2.35 ± 0.03 | It may be possible to improve on a given task by “overfitting” the augmentation parameters, but it is observed both in SSL for computer vision [5] and in our work that a good set of augmentations for SSL provides the best results across a wide range of tasks. In our experiments for example, the best representation evaluated during pre-training works out of the box for time-stepping (Table 1). ### Computational overhead of SSL vs supervised SSL pre-training typically has higher training costs compared to just supervised methods. Crucially, pre-training is a fixed cost, and subsequent training costs for downstream tasks starting from SSL features are negligible: one linear layer on top of frozen SSL features is usually sufficient, whereas supervised learning requires tuning the whole network to get similar results [1-2]. Since SSL features are computationally cheap to use and transfer better [3-4], the SSL approach is advantageous when features are reused. Since the purposes of SSL and DeepONets seem different (see comment above), could the reviewer elaborate on the comparison they expect? **References:** [1] Chen, Ting, et al. "A simple framework for contrastive learning of visual representations." International conference on machine learning. ICML 2020. [2] Bardes, Adrien et al. "Vicreg: Variance-invariance-covariance regularization for self-supervised learning." ICLR 2022. [3] Ericsson, Linus et al. 
"How well do self-supervised models transfer?." CVPR 2021. [4] Tian, Yonglong, et al. "Rethinking few-shot image classification: a good embedding is all you need?." ECCV 2020. [5] Oquab, Maxime, et al. “DINOv2: learning robust visual features without supervision”, arXiv preprint (2023) --- Rebuttal Comment 1.1: Comment: 1. Sorry, my previous comment was somehow cut off. I wanted to say that it would be unkind to ignore all the work on multi-fidelity modeling. The authors claim clearly, "SSL attempts to learn a rich yet easy-to-leverage feature space", which is exactly the purpose behind multi-fidelity models. In conclusion, I would like the authors to acknowledge that SSL is another way of creating models with "easy-to-leverage feature spaces" and that there are prior works doing so using multi-fidelity models. 2. Given the results that the authors have shared about mixing the differential equations, I think the authors must tone down the claims about foundation models, as there are more steps than just scaling the current framework. Nevertheless, this is certainly a step forward toward the goal. I am happy to see this! 3. While I appreciate the clarity of the authors in the response, I do think both SSL and neural operator-type methods would be needed. The example results of FNO and FNO + SSL look good. I would like to see something similar with DeepONets vs. DeepONets + SSL. In other words, a foundation model may not just be SSL, but we may also need ingredients of a neural operator to be incorporated. Therefore, I would like to see the SSL in conjunction with the neural operators. --- Reply to Comment 1.1.1: Title: Response to reviewer clarifications Comment: Thank you for clarifying your question and adding further feedback. We have responded to the additional comments below. *** ### On Multi-Fidelity Modeling and General Discussion of Other Feature Learning Methods Thank you for pointing out the connection between multi-fidelity modeling and our approach using SSL. 
We appreciate your feedback and agree that recognizing related works is crucial for a holistic understanding of our work's context. We concur that both multi-fidelity modeling and SSL can reduce dependency on extensive training data and have practical implementations. Despite these similarities, the two methods serve somewhat different overall purposes, and exploit data in unique ways. Multi-fidelity modeling primarily combines data and models of varying fidelities. Common goals include training models “using data from different levels of fidelity” [1] or enhancing “accuracy by injecting a small set of high-fidelity observations” into less accurate models [2]. In contrast, SSL aims to harness salient features from diverse data sources without being tailored to specific applications. The techniques we employed capitalize on the inherent structure of our dataset, especially through augmentations and invariances. Looking more broadly, we acknowledge that SSL is not the sole approach to feature learning. There exists a myriad of techniques, including metric learning, kernel design, autoencoders, and others [3-4]. We opted for SSL due to its proven efficacy in fields like computer vision and the direct analogy offered by our data augmentations. Nonetheless, you are correct that there are some high-level commonalities between multi-fidelity modeling and our approach. We appreciate this observation, and will add multi-fidelity modeling to the discussion of related work in the final revision. ### On Foundation Models and Mixing Differential Equations Thank you for your appreciation of the new mixed equation experiment. As stated in our response, we will tone down the language surrounding foundation models and share more details of the mixed equation experiments in the final draft. ### On Experiments with DeepONets Thank you for emphasizing the potential synergy between SSL and various neural operator-type methods. 
Your insight on the possible complementarity between these approaches is well-taken. To your point on expanding our experiments: While we have showcased the benefits of integrating SSL with the FNO, a representative neural operator architecture, we understand the value of further extending this analysis to other architectures like DeepONets. In preparation for the final paper, we commit to providing experimental results with DeepONets in tandem with SSL. We would like to underscore, however, that while additional architectures will certainly offer a broader perspective, the underlying message of our work regarding the usefulness of SSL remains consistent. We resonate with your viewpoint that the strength of SSL is amplified when used in harmony with other state-of-the-art techniques, thereby underscoring the collaborative nature of innovation in this domain. **References:** [1] Fernández-Godino, M. Giselle, et al. "Review of multi-fidelity models." arXiv preprint arXiv:1609.07196 (2016). [2] Perdikaris, Paris, et al. "Nonlinear information fusion algorithms for data-efficient multi-fidelity modelling." Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 473.2198 (2017): 20160751. [3] Kaya, Mahmut, and Hasan Şakir Bilge. "Deep metric learning: A survey." Symmetry 11.9 (2019): 1066. [4] Williams, Christopher KI, and Carl Edward Rasmussen. Gaussian processes for machine learning. Vol. 2. No. 3. Cambridge, MA: MIT press, 2006.
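The "easy-to-leverage feature space" discussed throughout this exchange is learned with a joint-embedding criterion such as VICReg, cited in the rebuttal above as [2]. The following is a schematic numpy version of the three VICReg loss terms (invariance, variance, covariance), shown only to make the objective concrete; it is not the authors' implementation, and the weights are illustrative defaults:

```python
import numpy as np

def vicreg_loss(z1, z2, sim_w=25.0, var_w=25.0, cov_w=1.0, eps=1e-4):
    """Schematic VICReg objective on two (n, d) batches of embeddings
    of two augmented views of the same samples."""
    n, d = z1.shape
    # Invariance: embeddings of the two views should match.
    sim = np.mean((z1 - z2) ** 2)
    # Variance: keep each embedding dimension spread out (avoid collapse).
    var = 0.0
    for z in (z1, z2):
        std = np.sqrt(z.var(axis=0) + eps)
        var += np.mean(np.maximum(0.0, 1.0 - std))
    # Covariance: decorrelate embedding dimensions.
    cov = 0.0
    for z in (z1, z2):
        zc = z - z.mean(axis=0)
        C = (zc.T @ zc) / (n - 1)
        cov += (C ** 2).sum() - (np.diag(C) ** 2).sum()
    return sim_w * sim + var_w * var + cov_w * cov / d
```

A linear probe trained on frozen features from such a pre-trained encoder is the "SSL features" setting compared against supervised training in the tables above.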
Summary: This paper presents a general framework for self-supervised learning in a PDE context. In a way that is principled and natural, PDE symmetry groups are used to make the requisite augmentations from which self-supervision will learn structure; the augmentations are selected carefully so as to keep the regressed quantity (for the downstream task) constant. The utility of this method is demonstrated by regressing quantities of interest on four downstream PDEs, and the results indicate that the improvement of predicting from self-supervised representations is considerable relative to straightforward supervised learning. This work paves the way for adapting self-supervision to the physical sciences by way of carefully considered symmetry group-oriented data augmentations. Strengths: 1. The high-level idea is relatively novel, in that I do believe this is the first work that aims to make use of self-supervised learning for partial differential equations, with augmentations performed naturally according to Lie symmetries. The most closely related work [12] uses Lie point symmetries of PDEs in order to augment PDE datasets, but this happens purely in the context of supervised tasks. 2. On the four PDE datasets presented/four equations considered, the results provide a fairly convincing improvement over just supervised learning alone. The mean squared error of prediction is considerably reduced (Table 1). 3. In terms of writing and presentation, the paper is quite good. The field is well introduced, with relevant literature cited. The idea is clearly presented and figures are given to illustrate the methodology (Figures 1, 2, and 3 were all quite instructive). Weaknesses: 1. Although the idea of using Lie symmetry-based augmentation for self-supervised learning is novel, I was underwhelmed at the rigor with which these augmentations were applied. Symmetries of PDEs are derived with respect to infinite domains or periodic boundaries. 
The boundary conditions imposed in the paper violate such symmetries, and hence make the justification for this sort of operation theoretically dubious. The authors admit that, naturally, because they violate these symmetries, they can only implement the group operations with "small strengths." I believe that the work would benefit from further investigating how to preserve these boundary conditions during augmentation, thereby providing a theoretically sound basis for the self-supervised learning proposal. 2. One of the interesting discoveries of this paper is that learning on top of self-supervised representations is considerably better than just immediately employing supervised learning techniques (Table 1); this is in stark contrast to what we see in e.g. computer vision, where both exhibit roughly the same performance. Some cursory post hoc analysis is given to explain this observation, although no further investigation is done in terms of providing a good understanding (be it theoretical or empirical). Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: My questions to the authors are listed below: 1. Have you put more thought into preserving boundary conditions during augmentation and/or rectifying the fact that the symmetry transformations applied are no longer the proper symmetries, since the system no longer has infinite domain? 2. Have you done any further investigation into why self-supervised learning gives results that considerably exceed that of supervised learning alone? ### Verdict In this paper, the authors consider the novel idea of applying self-supervised learning to PDEs. Although they lay the basic groundwork (e.g. performing augmentation via Lie symmetries), I feel that more consideration is needed to find a way to preserve boundary conditions during augmentation, such that the work is theoretically principled. 
The results are promising, in that the proposed approach considerably outperforms supervised learning, but little investigation is done as to why (this is a surprising/interesting observation and I would like to see more of an explanation given). As such, I recommend a borderline reject rating for this paper. ### Additional Comments and Minor Corrections The writing and presentation were, for the most part, very good (as mentioned above). I would recommend clarifying some of the table headings in results figures, for example, changing "Best strength" in Figure 5 (left) to "Best strength ($\epsilon$)." A few minor corrections are given below: L221: "test samples.As shown in" -> "test samples. As shown in" L229: "for to evaluate the models" -> "to evaluate the models" L253: "buyoancy" -> "buoyancy" L254: "which the hardest evaluation setting" -> "which is the hardest evaluation setting" ## Post-rebuttal update After reading the authors' rebuttals, overall, I think the paper will make for a positive contribution to the conference. I have increased my score to a 5. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 2 fair Limitations: The authors have adequately addressed limitations of their work in the "Discussion" section (Section 5). Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for taking the time to review our paper and for the valuable comments and suggestions. We are glad that you found our high-level idea novel and our work well-presented. We acknowledge the concerns raised and would like to address them as follows. ___ ### On preserving boundary conditions during augmentation and/or rectifying the fact that the symmetry transformations applied are no longer the proper symmetries Your point about the use of Lie symmetries and their association with infinite domain or periodic boundaries is well-taken. As discussed in our draft, we agree that more rigorous treatment of this matter could fortify our approach. This is, in some ways, a rather fundamental challenge in SSL: a similar situation arises in the image setting, where symmetries such as resize or translation also incur boundary issues (for instance, if an important piece of the image is omitted by this augmentation). Importantly, our use of crops as an SSL augmentation mitigates this issue by biasing the network to learn local features that are in some sense robust to errors on the boundary. In essence, cropping ensures that learned features are invariant to whether the data was collected at a boundary or not. Both in the PDE and the image setting, this use of crops is crucial to the success of SSL. In more formally dealing with boundary conditions, we explored a number of ideas which are listed below. All of these ideas either significantly increased the complexity of the task or lost sight of the generality of the SSL PDE approach, so we opted to leave them to future work. - Approximate symmetries and discrete symmetries offer a way to explore many more symmetries, some of which may preserve boundary conditions. However, deriving these is no longer systematic: in the Lie symmetry case and in the approximate case, they provide solutions that are accurate only up to the order of the magnitude of the symmetry operation. 
These are also typically derived in settings with infinite domains. So apart from offering more flexibility in the symmetries, it is not obvious these directly help the boundary issue. - Another class of symmetries commonly studied are Lie Bäcklund symmetries [1], which offer a means to transform solutions from one PDE to another, but these are also typically derived in infinite domains. - Symmetries can be enforced at an infinitesimal level by using the Lie derivative defined as \begin{equation}\lim_{t\to 0} \frac{f\bigl(\Phi^t_X(p)\bigr) - f\bigl(p\bigr)}{t},\end{equation} where $\Phi^t_X$ is the exponential map for the Lie algebra vector field $X$ applied for an amount $t$. This Lie derivative can be enforced to be close to zero in the domain of the PDE outside of boundaries. Though this is nice in practice, the Lie derivative does not apply an augmentation, and we do not know how to apply this in SSL settings. To summarize, at least in this first work introducing the method, we wanted to offer a general approach to SSL for learning PDE data that was relatively simple to implement. The above approaches would have greatly complicated our approach, for questionable gain and loss of generality. As evidenced both in existing self-supervised work for images and our experiments, the representations learned even by these imperfect augmentations still contain rich information for downstream tasks. We welcome more thoughts and discussion from the reviewer on this point. ### Why SSL works better In general, understanding when and why SSL works is an active area of research. We know from prior theoretical and experimental results that, when done right, SSL pre-training finds feature spaces that are suited to diverse downstream targets and less prone to fitting trivial features [2-4]. From analyzing loss plots, we have some evidence that this is the case here. 
We have added loss plots for kinematic viscosity regression (Burgers') in Figure 1 (see attached PDF) explaining the observed gaps in Table 1 of our paper between “Supervised” and “SSL features”. Both methods rely on a ResNet18 architecture. In the “Supervised” setting, this network is trained from scratch, whereas for the “SSL features” method, the network was pre-trained with our SSL framework, subsequently frozen, and then a linear model was trained on top of the output features. “SSL features” (i) are rich yet can easily be leveraged with a linear layer, allowing fast convergence towards small test errors and (ii) are less prone to overfitting than supervised learning while using the same architecture. The gaps observed in Table 1 might be reduced via longer “Supervised” training or other efforts, but this would go beyond the scope of a fair evaluation. SSL features are simply competitive and easier to use. **References:** [1] Ibragimov, Nail H. CRC handbook of Lie group analysis of differential equations. Vol. 3. CRC press, 1995. [2] Chen, Ting, et al. "A simple framework for contrastive learning of visual representations." International conference on machine learning. PMLR, 2020. [3] HaoChen, Jeff Z., et al. "Provable guarantees for self-supervised deep learning with spectral contrastive loss." Advances in Neural Information Processing Systems 34 (2021): 5000-5011. [4] Cabannes, Vivien, et al. "The ssl interplay: Augmentations, inductive bias, and generalization." International Conference on Machine Learning. PMLR, 2023. --- Rebuttal Comment 1.1: Title: Thank you for the rebuttal Comment: I have read the rebuttal as well as the rebuttals to other reviewers' concerns. Cumulatively, I think the work lays a reasonable foundation and sets the stage for interesting follow-up work. I have increased my score to a 5.
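The Lie derivative written out in the rebuttal above can be estimated numerically: the finite-difference quotient along a one-parameter flow should vanish for features that are invariant to that flow, and not otherwise. An illustrative sketch with periodic spatial translation as the flow (all helper names hypothetical):

```python
import numpy as np

def lie_derivative_fd(f, flow, p, t=1e-5):
    """Finite-difference estimate of the Lie derivative
    (f(Phi_X^t(p)) - f(p)) / t for small t, along a one-parameter flow."""
    return (f(flow(p, t)) - f(p)) / t

def translate(u, t, L=1.0):
    """Flow: periodic spatial translation of a profile sampled on [0, L)."""
    x = np.linspace(0.0, L, u.size, endpoint=False)
    return np.interp(np.mod(x - t, L), x, u, period=L)

u = np.sin(2 * np.pi * np.linspace(0.0, 1.0, 64, endpoint=False))
mean_feat = lambda v: v.mean()   # translation-invariant feature
point_feat = lambda v: v[0]      # not translation-invariant
```

Here the mean of the profile is exactly preserved by a periodic shift (its Lie derivative is numerically zero), while a pointwise feature is not; enforcing such derivatives to be small is the idea the rebuttal describes as "nice in practice" but not directly an augmentation.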
Rebuttal 1: Rebuttal: ## Summary Response We thank the reviewers for their insightful comments and many great questions. We have responded to each reviewer’s comments separately, and are sharing a summary response covering the common threads in the reviewers’ responses. To enhance our responses, we have added experiments to back up many of the points we make below. We truly value further input from the reviewers on these matters. *** ### Replicating experiments for different architectures: Multiple reviewers pointed out that they would be more confident in our results if we replicated time-stepping for different architectures, especially those that operate in Fourier space. We agree with this perspective, and are happy to share new results for different architectures: - For Burgers: FNO1d, conditioned as detailed in our work. - For Navier-Stokes: Fourier Neural Operator [2] with SpaSpec conditioning [3], Fourier U-Nets with Addition conditioning [3], and an additional conditioning method of U-Nets, AdaGN [3]. | Burgers' (NMSE) | ResNet1d | FNO1d | |-----------------------------|--------------|-------------| | No conditioning (baseline) | 0.110 ± 0.008| 0.184 ± 0.002 | | Representation conditioning | 0.108 ± 0.011| 0.173 ± 0.002 | | Navier-Stokes (MSE x 1e-3) | U-Nets + Addition | U-Nets + AdaGN | FNO + SpaSpec | Fourier U-Nets + Addition | |-----------------------------------------------------------------|-------------------|----------------|----------------|--------------------------| | Time conditioning only (baseline) | 2.60 ± 0.05 | 2.37 ± 0.01 | 13.4 ± 0.5 | 3.31 ± 0.06 | | Time conditioning + representation conditioning (ours) | 2.47 ± 0.02 | 2.35 ± 0.03 | 13 ± 1.0 | 2.37 ± 0.05 | ### Handling boundary conditions: As two reviewers have pointed out and as detailed in the limitations section of our work, boundary conditions can be a source of error when implementing Lie point symmetries. 
Since virtually all symmetry derivations are done assuming infinite or periodic domains, it is not necessarily obvious how to mathematically handle these boundaries in derivations of symmetries. In practice, this technical issue is not as major as one would expect: - A similar situation arises in image settings where augmentations like resize or translation cause issues on the boundary. However, the goal is to learn global features that are invariant to these boundary issues. In pursuit of that higher level goal, SSL is still effective at learning with these augmentations included. - Crops are an essential augmentation both here and in the standard image setting [1]. For PDEs, this is partly because crops help bias the network to learn features that are invariant to whether the input was taken near a boundary or not. - There is no obvious technical solution to this problem as far as we are aware. Indeed, other forms of PDE symmetry groups like approximate symmetries, discrete symmetries, or Lie Bäcklund symmetries are also derived in infinite domains and do not address boundary issues. These symmetry groups are typically much harder to derive, more complicated, and do not necessarily come with the nice Lie algebraic structure associated with the Lie point symmetry group. There is much more to say on this point and our more detailed thoughts are captured in our response to reviewer VTPv which we recommend that reviewers look at. We are open to more ideas and happy to continue the discussion on this point. **References:** [1] Chen, Ting, et al. "A simple framework for contrastive learning of visual representations." International conference on machine learning. PMLR, 2020. [2] Li, Zongyi, et al. "Fourier neural operator for parametric partial differential equations." arXiv preprint arXiv:2010.08895 (2020). [3] Gupta, Jayesh K., and Johannes Brandstetter. "Towards multi-spatiotemporal-scale generalized pde modeling." arXiv preprint arXiv:2209.15616 (2022). 
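As a concrete instance of the Lie point augmentations under discussion: Burgers' equation u_t + u u_x = nu u_xx admits the Galilean boost u(x, t) -> u(x - eps*t, t) + eps. Below is a sketch (hypothetical helper, not the authors' code) of applying it to a solution sampled on a periodic grid, using the small strengths the limitations section advises:

```python
import numpy as np

def galilean_boost(u, x, t, eps):
    """Apply the Galilean symmetry of Burgers' equation,
        u(x, t) -> u(x - eps * t, t) + eps,
    to a solution sampled on a periodic grid, u[i, j] = u(t[i], x[j])."""
    L = x[-1] - x[0] + (x[1] - x[0])  # periodic domain length
    out = np.empty_like(u)
    for i, ti in enumerate(t):
        # evaluate the profile at the boosted coordinates, wrapping periodically
        shifted = np.mod(x - eps * ti - x[0], L) + x[0]
        out[i] = np.interp(shifted, x, u[i], period=L) + eps
    return out
```

One can verify by substitution that the boosted field still solves Burgers' equation on an infinite or periodic domain; on a truncated non-periodic domain the wrap-around is exactly the boundary artifact discussed above, which is why only small strengths eps are used.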
NeurIPS_2023_submissions_huggingface
2023
Revisiting Area Convexity: Faster Box-Simplex Games and Spectrahedral Generalizations
Accept (poster)
Summary: This paper focuses on box-simplex games, which are min-max problems over degree-2 polynomials that are bilinear in the inner and outer optimization variables. The inner problem is over the simplex and the outer problem is over the unit hypercube. The goal of the paper is to unify the previous framework derived in [She17] with standard Bregman divergence domination conditions. This latter framework [She17] consists of exploiting the primal-dual nature of the min-max problem to set up a regularizer over the unit hypercube. This satisfies the so-called "area convexity" condition instead of strong convexity. The outer loop method relies on an extragradient method and the inner loop relies on an alternating minimization sub-procedure. In order to analyze this framework, the authors explain the connections with relative smoothness and extragradient methods. This yields an algorithm to solve box-simplex games that improves on the one of [She17] by a log factor. Strengths: The paper is well-written and I believe quite clear for non-experts in this field. The authors are willing to bridge the gap between area convexity and more standard convexity tools, which makes the contribution original. The proof sketches sound correct. Weaknesses: I am not a specialist in this field and with a very limited amount of time (too short to read the 39 pages of supplementary material), it is completely impossible to judge whether the framework is correct or not. The authors stated the main theorems and did provide proof sketches and hints that could possibly convince some readers. I have doubts that this work, possibly sound and surely interesting, could be a good fit for NeurIPS. There are neither concrete algorithmic implementations nor benchmarking in the main submission document. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - after l72 (2): could you please define Delta^(dxd) (the set of trace 1 PSD matrices)? - In (4) what is h? 
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: There is no limitation section provided by the authors in the main submission. The authors do not address any limitation and do not provide any further research directions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your reviewing efforts. We are happy to hear that you found our paper well-written and interesting, and that you felt it was understandable for non-experts. We would like to note to the reviewer that the format and scope of our paper, which addresses a fundamental theoretical problem of interest in machine learning applications, are similar to those of many of the theoretical papers which have appeared in past NeurIPS conferences, and we believe it is a good fit for the venue. If the reviewer has specific concerns about the correctness of any portion of our paper, we are of course happy to answer any questions or provide an explanation. Our proofs of correctness are self-contained and provided in full in the supplement. Regarding practical implementations, our main motivation in this work was clarifying and improving a theoretical tool, area convexity, which has found use in practical problems we believe are of value to the NeurIPS community (optimal transport, matching, min-mean-cycle, etc.), and showing how the same tool extends to solve semidefinite programming variants. While promising empirical evaluations of area convex algorithms for solving these applications on real-world examples have been conducted in prior works (see e.g. [JST19]), we agree that an important next step is to benchmark our improved area convex algorithm against them. The set name $\Delta^{d \times d}$ is reserved for the spectraplex, and the function name $h$ is reserved for entropy; both are defined in Section 1.3, but we will make this more clear when they appear. We agree that the inclusion of $\Delta^{d \times d}$ in (2) before it is defined is an oversight which we will address; thank you for pointing this out. 
Regarding limitations and extensions, we give a brief outline of how future work can improve Theorem 2 by using low-rank sketching tools after its statement (with an extended discussion in Section 1.2 of the supplement), but agree this can be more explicitly stated. We also believe Theorem 1 may be optimal up to constant factors due to the lack of progress in the historically easier simplex-simplex case (see Footnote 6), which Theorem 1 matches; however, proving a formal lower bound is an important future endeavor. Moreover, area convex algorithms appear to be specialized to bilinear settings such as linear and semidefinite programming currently; finding nonlinear generalizations (as suggested by another reviewer) would be an exciting extension of these tools. Finally, finding more applications and improving practical implementations of our algorithms are important extensions of this line of research. We will add these points in a revision of our paper; thank you for raising them. We hope this discussion was clarifying to the reviewer and improves your opinion of our paper. --- Rebuttal Comment 1.1: Comment: Thank you for your responses, I will retain my score.
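For readers wanting a concrete baseline for what the area-convexity machinery accelerates, the standard method for box-simplex games is mirror prox (extragradient) with a Euclidean prox on the box and an entropic prox on the simplex. The following is a textbook sketch of that baseline, not the paper's area-convex algorithm, with illustrative step size and iteration count:

```python
import numpy as np

def mirror_prox_box_simplex(A, b, c, iters=2000, eta=0.05):
    """Mirror prox for the bilinear saddle point
        min_{x in [-1,1]^n} max_{y in Delta_d}  x^T A y - <b, x> + <c, y>.
    Euclidean prox on the box, entropic prox on the simplex; returns the
    averaged iterates (the classical O(1/T)-rate baseline)."""
    n, d = A.shape
    x, y = np.zeros(n), np.full(d, 1.0 / d)
    x_avg, y_avg = np.zeros(n), np.zeros(d)
    for _ in range(iters):
        # half step at the current point
        xh = np.clip(x - eta * (A @ y - b), -1.0, 1.0)
        yh = y * np.exp(eta * (A.T @ x + c))
        yh /= yh.sum()
        # full step using gradients evaluated at the half point
        x = np.clip(x - eta * (A @ yh - b), -1.0, 1.0)
        y = y * np.exp(eta * (A.T @ xh + c))
        y /= y.sum()
        x_avg += xh
        y_avg += yh
    return x_avg / iters, y_avg / iters

def duality_gap(A, b, c, x, y):
    """Best-response gap: max_y f(x, y) - min_x f(x, y) over box and simplex."""
    upper = np.max(A.T @ x + c) - b @ x
    lower = -np.abs(A @ y - b).sum() + c @ y
    return upper - lower
```

The log-factor savings discussed in the rebuttal come from replacing the strongly convex regularizers implicit in these prox steps with an area-convex one.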
Summary: The authors study first-order algorithms for box-simplex games, a special kind of two-player zero-sum bi-affine constrained game. The constraints of these games dictate that the first player selects an action represented by a vector within the n-dimensional box ($[-1,1]^n$), while the second player chooses an action defined by a vector from the d-dimensional simplex ($\Delta_d$). The authors give an algorithm for the computation of an $\epsilon$-approximate saddle point of a box-simplex bi-affine game, expressed as $\min_{x\in[-1,1]^n}\max_{y\in\Delta_d} x^\top A y - \langle b,x\rangle + \langle c,y\rangle$, in time $O(\mathrm{nnz}(A)\cdot L \cdot\log(d)/\epsilon)$, where $L$ is the maximum 1-norm of any column of $A$, and $\mathrm{nnz}(A)$ denotes the number of non-zero entries in the matrix $A$. They advance over the previously best-known first-order method by Sherman, removing logarithmic terms. Box-simplex games have important applications, and the authors' algorithm sets a new state-of-the-art, including for applications like approximate max flow and optimal transport. In the latter part of the paper, the authors broaden their focus to a new class of two-player games: box-spectraplex games. This is a generalization of box-simplex games where one player still picks an action in an n-dimensional box ($[-1,1]^n$), but the other player chooses as an action a positive semi-definite matrix $Y$ with trace equal to 1. Applications of this kind of game include spectral sparsification. The authors present a first-order algorithm for computing an approximate saddle point of a box-spectraplex game. After rebuttal: The authors' rebuttal successfully addressed my concerns, and as a result, I have chosen to maintain my original score. Strengths: This paper studies an interesting class of games with important applications, thereby contributing to the current state-of-the-art algorithm. 
Moreover, the authors' novel reinterpretation of area convexity as the more standard relative smoothness allows for the use of more conventional analysis methods, which is interesting in its own right. I looked at the statements of the lemmas and they seem reasonable. I only verified the proofs in the main body. Weaknesses: In the spirit of constructive criticism, I believe that the writing could be improved for better readability. Currently, the appendix appears more like a journal version of the paper rather than serving its intended purpose. A few recommendations to enhance the paper include: - On line 63, defining the notation $\mathrm{nnz}(A)$ and $||A||_{1\rightarrow 1}$ earlier might improve clarity. - On lines 105-16, it could be beneficial to define the operator $T_{mv}$ sooner. - Several references to 'outer' and 'inner loop' throughout the paper lacked clarity. Lines 38, 215, and 224 are examples of this. Finally, the move towards box-spectraplex games seems more theoretical than practical, with limited application examples. Please see the question section. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Could the authors provide more examples of problems that could be reduced to box-spectraplex games? I was a bit confused by the proof sketch of Theorem 1. For the telescoping sum to be valid, it appears necessary that $\bar{y}_{t+1}$ must equal $\bar{y}^+_t$ (where $\bar{y}_t$ represents the input to the extragradient step-oracle at step $t$, and $\bar{y}^+_t$ indicates the output of the extragradient oracle at step $t$). However, upon reviewing Algorithm 3 in the supplementary material, different notation seems to be used. Could the authors provide clarification on this? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The authors list the assumptions for their theorems, and I do not believe this work can have negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
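To make the box-simplex objective in the summary above concrete, the sketch below solves a tiny instance of $\min_{x\in[-1,1]^n}\max_{y\in\Delta_d} x^\top A y - \langle b,x\rangle + \langle c,y\rangle$ with a plain mirror-prox (extragradient) iteration: a Euclidean step for the box player and an entropic (multiplicative-weights) step for the simplex player. This is an illustrative baseline only, not the paper's area-convexity-based algorithm; the function names and step size are assumptions.

```python
import numpy as np

def mirror_prox_box_simplex(A, b, c, steps=2000, eta=0.05):
    """Mirror-prox sketch for min_{x in [-1,1]^n} max_{y in simplex_d}
    x^T A y - <b,x> + <c,y>. Box player: Euclidean step + clipping;
    simplex player: entropic step. Returns averaged half-iterates."""
    n, d = A.shape
    x, y = np.zeros(n), np.full(d, 1.0 / d)
    x_avg, y_avg = np.zeros(n), np.zeros(d)
    for _ in range(steps):
        # half step from the gradient at the current point
        gx, gy = A @ y - b, A.T @ x + c
        x_half = np.clip(x - eta * gx, -1.0, 1.0)
        y_half = y * np.exp(eta * gy); y_half /= y_half.sum()
        # extragradient step using the half-point's gradient
        gx, gy = A @ y_half - b, A.T @ x_half + c
        x = np.clip(x - eta * gx, -1.0, 1.0)
        y = y * np.exp(eta * gy); y /= y.sum()
        x_avg += x_half; y_avg += y_half
    return x_avg / steps, y_avg / steps

def duality_gap(A, b, c, x, y):
    """Gap = max_y' f(x, y') - min_x' f(x', y); both best responses have
    closed forms (largest coordinate over the simplex, signs over the box)."""
    best_y = np.max(A.T @ x + c)           # simplex best response
    best_x = -np.sum(np.abs(A @ y - b))    # box best response: x_i = -sign(g_i)
    return (best_y - b @ x) - (best_x + c @ y)
```

The per-iteration cost here is a few matrix-vector products with $A$, i.e. $O(\mathrm{nnz}(A))$, which is also the per-iteration budget of the method discussed in the review; the iteration count is where the reviewed paper's contribution lies.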
Rebuttal 1: Rebuttal: Thank you for your insightful feedback; we appreciate that you found both the problems we study and our insights to be interesting. We agree with your suggestions on presentation, and discussed some potential directions for incorporating them in the global response. We are happy to incorporate any of your further suggestions. We also agree with your notation and naming convention suggestions, and will address these in a revision. Regarding Theorem 1, indeed, $\bar{y}_{t + 1}$ is set to be the third argument of the output of Algorithm 2 (denoted $\bar{y}^+$) in each iteration. In Algorithm 3 in the supplement, we performed this computation explicitly in terms of the inputs of Algorithm 2, which can be verified to be consistent. However, we agree it would be more clear to have a “conceptual” variant of Algorithm 3 which simply explains how the points in the analysis of Theorem 1 are derived from the abstract oracles in Definitions 4 and 5 of the main body, and will provide this in a revision. Regarding further applications of box-spectraplex games, we note that the community has previously not provided any nontrivial algorithms for this problem beyond black-box applications of non-tailored methods (or even defined it explicitly to the best of our knowledge), likely due to the lack of progress even in the box-simplex case until recently [She17]. We find it encouraging that recent work [JRT23] was already able to use our result to obtain improvements for a fundamental problem, and are hopeful that other previous applications of positive SDP solvers (as outlined in Page 3 of the main submission) may also be amenable to our tools. At the moment we do not have other examples of applications of box-spectraplex games, but we believe this is outside the scope of our submission (whose goal was to provide the first near-linear time solver), and consider this an exciting direction opened up by our result. 
--- Rebuttal Comment 1.1: Comment: Thank you for the clarification. I'm pleased with your response and will retain the score I've assigned.
Summary: This paper looks at the bilinear min-max Area Convexity framework proposed by Sherman through the lens of the Relative Lipschitzness condition of Cohen et al., which relates to the standard Bregman analysis in mirror-type first-order optimization algorithms. Leveraging this connection, they provide a standard analysis of a two-timescale Extragradient method with the Bregman divergence induced by the area-convex regularizer. Tuning a parameter in the analysis and using the structure of the area-convex regularizer, they implement an approximate version of Extragradient where each approximate iteration requires $O(\mathrm{nnz}(A))$ computations. This allows them to remove the $\log(1/\epsilon)$ factor Sherman incurred in the analysis of the inner loop. They extend a similar result to the box-spectraplex case. Strengths: The paper has new ideas. E.g., tuning the parameter $\alpha/\beta$ in the regularizer was a critical step that allowed for the approximate implementation done in Lines 5-9 and Lines 12-17 of Algorithm 3. Weaknesses: The main weakness of the paper is its presentation. The main paper differs from the supplementary material so much that it almost feels like reading a new paper, with different orderings, equation numbers, etc. Also, some concepts are introduced which may not be directly relevant to the problem at hand: e.g., Definition 1 is not used throughout the main paper. I had to dig until Appendix B to understand why there was a fuss about it in the very first technical section, i.e., Section 2.1. The paper borrows key ideas developed by Sherman (which are popular by now) and some previous developments such as Lu et al. and Cohen et al. In particular, the idea of Area Convexity is so far used almost exclusively for bilinear games, and there has been no major movement towards general convex-concave problems. In that sense, this is incremental work in my opinion. Technical Quality: 3 good Clarity: 1 poor Questions for Authors: My questions below refer to the supplementary material (or actual paper). 1. 
It would be best if you could explain how steps 7, 8, 9 relate to the steps in Algorithm 1, with the exact parameters used. Leaving it as "by observation" may not work for a large fraction of readers. The same goes for steps 14-17. It would also be worthwhile to remind the reader that $h$ is the entropy divergence function used here. 2. The proof of Lemma 6 needs to deliberate more on which $F$ to use to define $f(\hat{y})$ in (13) such that Lemma 4 can be applied. 3. Using $x_{\mathrm{br}}$ for two different purposes in the same section is bad notation. In particular, Lemma 4 defines $x_{\mathrm{br}}$ as the argmin of $F(x,y)$ for $F$ defined in Section 2.2 of the main paper (again, I could not find it in the supplementary paper). Is this the same as used in Lemma 6? 4. Remind the reader about $H$ at the beginning of Section 4. 5. In the proof of Lemma 8, you use Theorem 3.3 of LS01. It would be better if you provided the result, as there is already quite a bit of jumping around while reading the paper. I haven't been able to verify the claims in the proof of this lemma. 6. I have read some of the references in previous papers of this area. Can you also comment on the BSW19 paper? As I understand, they also implement Sherman's MWU algorithm for inner subproblems and hence incur an additional $\log(1/\epsilon)$ factor similar to Sherman or JST19. 7. Can you also compare the result for the box-spectraplex case with BBN13, AL17, CDST19? In particular, it would be better to state clearly in which regimes the algorithm in Section 4.4 would be better than box-spectraplex variations of the above papers. BSW19: Faster width-dependent algorithm for mixed packing and covering LPs Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 1 poor Contribution: 2 fair Limitations: I am giving this paper borderline accept since it was not very well written and almost seems like reading two papers at a time which borrows/uses existing literature heavily (sometimes without giving context). If properly written, this paper can be improved substantially. But that will require significant rewriting and hence, I don't see how it will be justified in this conference set up. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
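For readers unfamiliar with it, the Relative Lipschitzness condition of Cohen et al. invoked in the summary above can be stated as follows (reconstructed from memory; notation may differ slightly from the submission):

```latex
% An operator g is \lambda-relatively Lipschitz with respect to a convex
% regularizer r, with Bregman divergence
%   V_z(w) = r(w) - r(z) - \langle \nabla r(z), w - z \rangle,
% if for consecutive iterates z, w and any comparison point u:
\langle g(w) - g(z),\, w - u \rangle \;\le\; \lambda \left( V_z(w) + V_w(u) \right).
% Area convexity of r implies a Bregman-divergence bound of this form for the
% bilinear gradient operator, which is the reinterpretation the summary refers
% to; it plugs directly into the standard mirror-prox regret telescoping.
```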
Rebuttal 1: Rebuttal: Thank you for your careful reviewing efforts and many helpful comments. We agree with the revisions suggested by your Questions 1, 2, 3, 4, and 5, and will incorporate them. Regarding Question 1, in the main body of our revision, we will include an “abstract” variant of Algorithm 3 in the supplement which calls Definitions 4 and 5 directly and precisely states what parameters are used in the calls (these parameters are stated in the proof of Theorem 1, though we agree they can be made more explicit). Regarding Question 2, the function $F$ in the application of Lemma 4 is the joint function of $x$ and $y$ defined in (11) of the supplement (before minimizing over $x$), but we will clarify this. Regarding Question 3, we agree the overloaded notation is confusing and this will be clarified; the use of notation $x_{\text{br}}$ in Lemma 6 is consistent with the use in Lemma 4 with a specific choice of $F$. Regarding Question 5, we will explain in self-contained terms the result from [LS01], which is that the gradient of a spectral function (on matrices) agrees with its vector gradient on the eigenvalues. Regarding Question 6, we agree that [BSW19] is a relevant paper to this line of work and will include a citation, and that it suffers from the same logarithmic overhead in runtime due to its reliance on alternating minimization as in [She17]. In the process of completing this work, we actually reached out to the authors of [BSW19] because they did not include a proof of correctness that their alternating minimization converges, which we had some doubts about because their setup has different properties than [She17]. They have not yet gotten back to us with a proof of correctness, so we were unsure how to address this point in the submission. Regarding Question 7, consider the problem of solving a box-spectraplex game with Lipschitz constant $L$, as defined in Theorem 2 of our paper. 
Compared to [BBN13, AL17, CDST19], our algorithm incurs an overhead of $L/\epsilon$ in the runtime, but saves a factor of $d^2$ (the dimension of the matrices) due to the size of $\ell_1$-strongly convex regularizers over the $\ell_\infty$ ball (the box, as opposed to the simplex which [BBN13, AL17, CDST19] are tailored to handle). By using “half-regularized” strategies as discussed in the introduction of [She17], we believe this $d^2$ overhead can be improved to a $d$, but the analysis of low-rank sketches used in [BBN13, AL17, CDST19] becomes more challenging. In summary, we lose a $L/\epsilon$ factor but save at least a $d$ (and possibly a $d^2$ factor) over these methods, which is favorable in the setting where first-order methods are preferred over e.g. interior point methods in the first place (when $L/\epsilon$ is small compared to $d$). We acknowledge your concerns on readability and will take steps to address them, summarized in our global response. We believe our conceptual contributions and algorithmic improvements can be understood from the shortened version of our paper, and thus our work is appropriate for publication at NeurIPS; we hope that aligning our presentation more closely between the shortened version and the supplement will help bridge the gap in readability. Regarding the incrementality of our results, we conjecture our Theorem 1 is optimal within a constant factor as it matches state-of-the-art deterministic algorithms in the easier simplex-simplex setting (which have not been improved in 20 years) under a more challenging problem geometry; we hence believe there is further merit in closing this gap. Finally, the tools we introduce to obtain our result generalize readily to the box-spectraplex setting, overcoming obstacles from prior analyses as explained in Section 2.2. We hope this discussion clarifies and elevates the merits of our paper to you. 
Thank you again for all the detailed feedback from a reviewer familiar with the line of work on area convexity. --- Rebuttal Comment 1.1: Comment: Thank you for your reply. After carefully reading your response, I have decided to retain my score.
Summary: In this paper, the authors consider box-simplex games, a bilinear min-max optimization problem with box and simplex constraints. The current best-known method for solving such problems is Sherman's algorithm, which is based on the concept of "area convexity". The key insight of this work is to reinterpret area convexity as a more general notion of relative Lipschitzness in the optimization literature. Using this new perspective, they streamline the proof of Sherman's algorithm, propose an improved subproblem solver that eliminates the additional log overhead, and further extend the algorithm to box-spectraplex games. Strengths: - The authors show that area convexity implies a Bregman divergence bound similar to relative Lipschitzness, which is a simple yet insightful observation. I appreciate that it better clarifies the connection between Sherman's algorithm and the classical extragradient methods, making it conceptually easier to understand and extend. - The proposed subproblem solver improves a $\log(1/\epsilon)$ factor from the original Sherman's algorithm. As a corollary, it leads to the state-of-the-art runtime complexity for optimal transport among first-order algorithms. Weaknesses: I only wish the presentation of the paper could be more clear. In particular, the proposed extragradient algorithm is not fully described in the main text, but only implicitly in the proof of Theorem 1. It would be better if the authors can present the pseudocode and better explain the steps in the algorithm. Also, it is a bit confusing that the supplement is an extended version of the paper rather than the appendix, which makes it a bit hard to track down the proof. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - In Definition 4, the notation $V_z^{\alpha}$ is undefined. - If I understand correctly, both the GradStepOracle and XGradStepOracle are designed for solving the same kind of subproblem (at the bottom of page 6). 
Could you explain why they are implemented differently? And how are they related to the algorithm in [LFN18]? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The authors addressed the limitations by specifying the considered problem class. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your encouraging comments, and we are glad that you found our technical contributions insightful. We agree with your suggestions regarding the supplement, and outline some directions for improvement in the global response (though we are happy to incorporate any further suggestions as well). Regarding pseudocode for Theorem 1, our exclusion was largely due to space restrictions (in particular, Algorithm 3 in the supplement contains full pseudocode for our algorithm). However, we will provide a more explicit explanation before the proof for clarity. To our understanding the final conference version would have an additional allowed page, so if our paper is accepted we will use it to add clarity to this section of the paper. Thank you for pointing out our oversight in Definition 4, which we will also address in a revision. Regarding your question on asymmetry in the steps of our method, this reflects an asymmetry in the analyses of extragradient methods. In these, the regret is bounded for the “gradient oracle” points, but the regret upper bound is stated in terms of the divergences of the “extragradient oracle” points (which our inexact oracles need to compensate for). Our oracle implementations are not directly related to the algorithm in [LFN18], which was designed for iterative convex minimization (as opposed to our method, a one-shot subproblem solver for a minimax optimization problem), but certainly our analysis is inspired by theirs through the commonality of the tool we use (relative smoothness). We will add this discussion for clarity in a revision as well. --- Rebuttal Comment 1.1: Comment: I thank the authors for their response. Please make sure to incorporate the changes in the revision.
Rebuttal 1: Rebuttal: Reviewers Wg5p, E7hD, and Vb7x asked about inconsistencies between our main submission and supplementary material which hindered readability. We thank you for raising this important concern and completely agree; we will make efforts in a revision to make the two more consistent. Specifically, after all deferred proofs or contents in the main submission, we will add an explicit pointer to where the corresponding part of the supplement can be found. We will also order the two parts in a more compatible way (trying to preserve theorem names, etc. whenever possible), and make the main submission more self-contained by eliminating unused parts.
NeurIPS_2023_submissions_huggingface
2,023
Summary: This paper explores the relationship between area convexity and extragradient methods, and provides improved solvers for subproblems required by variants of the algorithm. The paper also presents a state-of-the-art first-order algorithm for solving box-simplex games and a near-linear time algorithm for a matrix generalization of box-simplex games. The authors demonstrate that their algorithms improve runtimes for combinatorial optimization problems and provide new insights into numerical linear algebra. The contributions of the paper include the development of efficient algorithms with improved convergence rates and computational efficiency, as well as the application of these algorithms to various optimization problems in fields such as optimal transport and min-mean-cycle. Strengths: Through a deeper understanding of the relationship between box-simplex games and matrix generalization problems, the authors propose improved solvers that leverage the lens of relative smoothness and convex analysis to design efficient algorithms with faster runtimes. These proposed algorithms not only improve runtimes for combinatorial optimization problems but also provide new insights into numerical linear algebra and have applications in various combinatorial optimization problems such as approximate maximum flow, optimal transport, and min-mean-cycle. In addition to its technical contributions, the paper is well-written, well-structured, and effectively communicates its main ideas and contributions. The authors present a rigorous analysis of the proposed algorithms, complete with detailed proofs, explanations, and appropriate use of mathematical notation. The inclusion of appendix sections, examples, illustrations, and a comprehensive literature review further enhances the quality, readability, and accessibility of the work. 
Overall, the strengths of this paper lie in its thorough exploration of area convexity, its innovative use of relative smoothness, and the development of improved algorithms for solving optimization problems. Weaknesses: 1. The author claims to have improved the runtime for the Box-simplex problem, resulting in increased performance of related applications. However, a detailed comparison with previous results is not provided. Providing such a comparison would help readers better understand the level of improvement and contribution of the paper. 2. The paper mentions the use of a proximal oracle in Algorithm 6, but more details on its implementation and computational complexity would be helpful. It would be valuable for the authors to discuss how the proximal oracle contributes to the overall efficiency of the algorithm and whether there are any limitations or challenges in its practical implementation. 3. The paper could benefit from experimental evaluation to validate the proposed approach. Conducting experiments on real-world or synthetic datasets would provide empirical evidence of the effectiveness and efficiency of the alternating minimization scheme. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Regarding Weakness 1: The author claims to have improved the runtime of multiple applications by solving the Box-simplex problem. However, a detailed comparison with previous results is not provided. Providing such a comparison would help readers better understand the level of improvement and contribution of the paper. Regarding Weakness 2: The paper mentions the use of a proximal oracle in Algorithm 6. Could the authors provide more details on the implementation and computational complexity of this oracle? It would be helpful to understand how the proximal oracle contributes to the overall efficiency of the algorithm and whether there are any limitations or challenges in its practical implementation. 
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The paper does not discuss the limitations and any potential negative consequences of their work. However, since their algorithm is focused on optimization problems, the impact on society would largely depend on the specific applications and use cases. A brief discussion on the potential real-world implications and ethical considerations would provide valuable context for readers to better understand the broader implications of the research. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your careful reviewing efforts. We are glad that you found our technical contributions interesting and our paper easy to read. Regarding prior box-simplex game solvers, our paper is most directly comparable to [She17], which it improves upon by a logarithmic factor in the runtime by removing the need for high-accuracy alternating minimization. The [She17] result was viewed as a breakthrough in the optimization community (and has not been improved since prior to our work), as previously for box-simplex games, first-order algorithms incurred an additional overhead of $\min(\epsilon^{-1},\sqrt{d})$ in the iteration count. Higher-order algorithms such as interior point methods also incurred polynomial overhead in runtime due to the need to solve linear systems. We agree this comparison is valuable context and not explicit, and will add it in a revision. We also note that we summarize our new results for applications in Section 5 of the supplement, though there too we will add a more explicit comparison to prior work (in all cases, we remove an extraneous log factor). While the improvement of Theorem 1 over [She17] is somewhat modest, it improves the runtime of box-simplex games to be comparable to state-of-the-art deterministic algorithms for the easier simplex-simplex setting (which have not been improved in 20 years) under a more challenging problem geometry, so we conjecture it is optimal up to constant factors; we believe closing this gap has additional merit. Finally, the tools we introduce to obtain our result generalize readily to the box-spectraplex setting, overcoming obstacles from prior analyses as explained in Section 2.2. Regarding Algorithm 6, we note that this is just a conceptual framework for extragradient algorithm design we introduce, and for specific problems care must be taken to handle any inexactness in the proximal oracle implementation. 
For the box-simplex and box-spectraplex game applications we study, we follow the framework of Algorithm 6 and provide end-to-end inexactness analyses and (near-linear) runtime guarantees for our results. Regarding practical implementations, our main motivation in this work was clarifying and improving a theoretical tool, area convexity, which has found use in practical problems we believe are of value to the NeurIPS community (optimal transport, matching, min-mean-cycle, etc.), and showing how the same tool extends to solve semidefinite programming variants. While promising empirical evaluations of area convex algorithms for solving these applications on real-world examples have been conducted in prior works (see e.g. [JST19]), we agree that an important next step is to benchmark our improved algorithm against them. We hope this response was clarifying, and elevates your opinion of our paper. Thank you again for your helpful suggestions, and we will take care to incorporate them in future versions. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed response. I have carefully evaluated your response, as well as the feedback provided by other reviewers. After careful consideration, I have decided to maintain my current score.
Rethinking the Role of Token Retrieval in Multi-Vector Retrieval
Accept (poster)
Summary: The authors improve multi-vector retrieval to move beyond the standard retrieve-gather-score stages of ColBERT. In particular, they modify the objective function during training as well as the scoring mechanism so it doesn't require gathering all token vectors of each candidate document before the final scores are computed. The authors begin by showing that, in ColBERT, the standard cross-entropy loss applied over the aggregated scores fails to reduce token-level scores if the average score is low. This can reduce the precision of the initial "retrieve" stage, putting a higher burden on the gather/score stages. To tackle that, the authors simulate the token retrieval stage during training by masking/skipping document token scores for tokens that aren't among the top-$k_{\text{train}}$ nearest to a given query token within the training batch. At search time, the authors use exclusively the retrieved (i.e., close) tokens from candidate documents. Scores of "missing" tokens are imputed/estimated with a lower bound from the kNN retrieval stage. Strengths: The paper is very strong overall. It tackles a well-defined problem with a well-motivated and novel solution. I can see this being widely applicable to multi-vector / ColBERT-like retrievers. The analysis and rich results are very strong as well, considering the paper doesn't use distillation from cross encoders. (The paper does use hard negatives, though --- from RocketQA). Weaknesses: Despite the very strong contributions, a number of claims in the paper are needlessly inflated (or unsupported). First, the abstract (and paper) say that XTR "enables a newly designed scoring stage that is two-to-three orders of magnitude cheaper than that of ColBERT". The basis of this claim is a theoretical FLOPs analysis of one of the three scoring stages of ColBERT against XTR. 
To the best of my knowledge, this appears, however, unsupported or otherwise inflated on a few dimensions: 1) This comparison doesn't count the actual FLOPs used. It also doesn't measure the latency of the scoring stage or the full pipeline. Even though these measurements may be less idealized than theoretical analysis, not reporting them makes it much harder to assess the proposed methods. 2) While XTR appears to make the scoring stage essentially free (at least based on the FLOPs analysis), it is unclear how much this affects the total computational cost of retrieval. Is the latency now low? Is it lower than existing multi-vector retrievers? The paper offers no empirical insight on this, but it's unlikely (or, rather, impossible?) that XTR is 2-3 orders of magnitude faster than them overall. For instance, ColBERT retriever in PLAID (2022) seems to report latency of 58 ms (or lower) per query on MS MARCO. Two orders of magnitude faster than that would be 0.6 milliseconds per query, which is much faster than even basic BM25 retrieval permits. 3) The comparison is conducted against the original ColBERT (2020), but this would appear to ignore years of optimizations for multi-vector scoring as in ColBERTv2 (2021), PLAID (2022), and other papers like ColBERTer (2022), etc. For instance, the PLAID ColBERTv2 retriever appears to show that candidate scoring (and the corresponding index lookup) is an extremely cheap step. This may be because PLAID moves a lot of the work to earlier stages, making them more expensive instead(?). However, there is very minimal engagement in the paper with the concerns of these developments in terms of faster candidate generation and/or much smaller index. While this isn't the focus of the paper, it's important to (at least) report the index size, how much of it needs to reside in memory, and how fast the overall retrieval process can be for XTR. 
Second, the abstract reports that "on the popular BEIR benchmark, XTR advances the state-of-the-art by 2.8 nDCG@10 without any distillation." It's true this is achieved without any distillation, but such a result is only achieved with the XXL (11B or 5.5B?) T5 encoder of XTR, which is orders of magnitude larger than related methods. This isn't an issue with the evaluation itself, but it shouldn't be glossed over. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: What is the intuition for the normalizer Z during training? How essential is this for the results? How robust are the analyses with the T5-ColBERT, compared to the original ColBERT and ColBERTv2 models? ColBERT typically has a query expansion stage that appears to be skipped in XTR. How does this affect the analysis and results? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 4 excellent Limitations: See Weaknesses & Questions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed review. **A1. This comparison doesn't count the actual FLOPs used. It also doesn't measure the latency of the scoring stage or the full pipeline.** As the reviewer summarized, the goal of our paper is to simplify the three-stage inference of ColBERT while making the scoring stage very efficient. Also, our claim on the FLOPs improvement is over the scoring stage, not over the entire pipeline. The apples-to-apples comparison of all baselines is difficult due to the differences in libraries (e.g. FAISS vs ScaNN, pytorch vs jax) and implementations depending on hardware constraints (e.g., multiprocessing, RAM usage). Nevertheless, for the sake of understanding the actual latency, we summarize the latency of our current implementation of XTR in the general response G1. We also show how much memory is needed to compute the full sum-of-max for T5-ColBERT. Please note that optimizing the latency of the entire pipeline is not our immediate goal as it requires a lot of engineering effort, which will depend on the given resources and algorithms. We will make this clearer in the paper. For comparisons against PLAID and ColBERTer, please see our general response G2. **A2. It's true this is achieved without any distillation, but such a result is only achieved with the XXL (11B or 5.5B?) T5 encoder of XTR, which is orders of magnitude larger than related methods.** In the abstract, we will properly tone down the state-of-the-art message and focus more on the contributions of XTR. We use the encoder part of T5-XXL, so it has about 5B parameters. **A3. What is the intuition for the normalizer Z during training? How essential is this for the results?** The normalizer Z provides stable losses during training. Since each document has a different number of mini-batch tokens retrieved, removing the normalization would give very high scores on positive documents, making the training process unstable. 
Not using Z (or setting Z to a constant) does not perform well compared to our proposed version of Z. **A4. How robust are the analyses with the T5-ColBERT, compared to the original ColBERT and ColBERTv2 models?** While most of our analyses were based on T5-ColBERT, we do acknowledge that there is a difference between T5-ColBERT and ColBERT (or v2) including query expansion and cross-encoder distillation. Based on our theoretical analyses in Appendix A, which uses the sum-of-max operator, we believe that our findings would hold for existing ColBERT variants even if they use the query expansion. More empirical comparisons against ColBERT could be needed to better support our hypothesis in the future. --- Rebuttal Comment 1.1: Comment: Thank you for the response. I think XTR is a powerful contribution conceptually and from a modeling standpoint. Because of that, I am willing to keep my score if the authors promise to tone down two claims in the paper pertaining to: 1. [from A2] "we will properly tone down the state-of-the-art message and focus more on the contributions" 2. [from A3] "Not using Z (or setting Z to a constant) does not perform well compared to our proposed version of Z" 3. [from A1] "our goal of the paper is to simplify the three-stage inference of ColBERT while making the scoring stage very efficient. Also, our claim on the FLOPs improvement is over the scoring stage, not over the entire pipeline" To expand on this one, A1 is a perfectly reasonable goal, and I agree that this work has achieved that. However, the current abstract, intro, and writing may have left me with the impression that XTR was in fact verified to be faster than (or at least competitive with!) existing ColBERT implementations. If how XTR would interact with systems work on optimizing these models is not even considered, this should be very clear to the reader. 
In particular, the latency numbers the authors report in G1 are not competitive with any recent realistic implementation of ColBERT-like methods to my knowledge, which generally search in tens or hundreds of milliseconds at most. While I understand that "differences in libraries and infrastructure" are significant, it is critical to be clear in the intro that (1) the authors' only implementation of XTR remains considerably less efficient in latency and in storage than existing work, (2) in the authors' tests of latency the scoring phase is 7-8x faster under such and such simplified conditions (which certainly softens the 4000x FLOPs reduction claim), and (3) future work is needed to confirm that the XTR efficiency gains can be realized in practice in a way that complements existing work. --- Reply to Comment 1.1.1: Comment: Thank you for the insightful suggestions! We will tone down the claims in our paper. We will make it clear that the FLOPs improvement we reported is over a vanilla implementation of the scoring phase, which serves as a proof-of-concept. It remains to be studied how XTR can improve efficiency over highly optimized ColBERT implementations from prior work.
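As an aside for readers following this thread, the scoring mechanism under discussion (using only retrieved tokens, with missing tokens imputed by a lower-bound score) can be sketched in a few lines. This is a minimal illustration with hypothetical names, not the authors' implementation; the imputation value here (the minimum retrieved similarity) stands in for the top-k' cutoff score from the kNN stage:

```python
# Illustrative sketch of XTR-style scoring: a document is scored using only
# the (query_token_index, similarity) pairs that survived token retrieval;
# query tokens with no retrieved hit get a lower-bound imputed score.
def xtr_style_score(retrieved, n_query_tokens):
    best = {}  # best retrieved similarity per query token
    for qi, sim in retrieved:
        best[qi] = max(best.get(qi, float("-inf")), sim)
    impute = min(sim for _, sim in retrieved)  # lower-bound imputation
    return sum(best.get(qi, impute) for qi in range(n_query_tokens)) / n_query_tokens

# Query token 1 was never retrieved for this document, so it receives the
# imputed lower bound of 0.4: (0.9 + 0.4 + 0.5) / 3 = 0.6.
score = xtr_style_score([(0, 0.9), (0, 0.4), (2, 0.5)], n_query_tokens=3)
```

Note that no document token vectors are gathered at all: the score is assembled purely from similarities the retrieval stage already produced, which is the source of the claimed savings.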
Summary: Authors propose a better document retrieval method. They build on top of ColBERT and instead of reranking using all tokens of documents retrieved by stage 1 (a query-token-to-document-token retrieval), they just use retrieved document tokens from stage 1 and perform retrieval using them. Strengths: - Results seem to be strong enough. - Do a decent job in ablations, qualitative analysis, complexity analysis. Weaknesses: - The paper overcomplicates a simple concept (Eq 4 can be further simplified from what I understand). Max over j would collapse then. - Given the proposed methodology is a training mechanism, it would be interesting to see how this method performs with training on NQ, TriviaQA, etc. Currently, the authors perform training only on MS MARCO and perform testing on NQ, TriviaQA, etc. DPR does this. - Very bad writing. - Very handwavy at some places - See Questions for detailed list of weaknesses. Technical Quality: 2 fair Clarity: 1 poor Questions for Authors: - For the choice of m_i, Eq. 4 would have collapsed to the form of Eq. 1 with extra Ai. Why show it in this form and complicate? Isn't max over j the same as $A_{ij}(q_i^\top d_j)$?? - Line 120. How is $f_{colbert}(Q, D^-)=0.2$ when $q_i^Td^-_j>0.8$. Failure case very handwavy. Please explain with proper numbers and reasoning. Is Fig 2 different tokens for any +ve/-ve documents or just the retrieved tokens? - Lines 134-137. Very handwavy again. Authors should use proper math format to denote in-batch tokens. It is very difficult to understand when $A_{ij}=1$. - Lines 134-137. Define top-k_train - For the document tokens and query tokens already in RAM, why not compute the pairwise similarities if Aij=0? This would still retain some speed? - Lines 226-241: what is token at rank k? are these top k retrieved tokens? Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 1 poor Contribution: 2 fair Limitations: The authors discuss the limitations. Given that the work focuses on the training mechanism, it would have been nice to see training with other datasets. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed review. Please read our clarification for any misunderstanding you might have had while reading the paper. **A1. The paper overcomplicates a simple concept (Eq 4 can be further simplified from what I understand). Max over j would collapse then.** First of all, the definition of the alignment matrix $A$ is different in Eq 1 and Eq 4 as described in our paper. Eq 1 shows the baseline sum-of-max operator and how the alignment matrix $A$ can be used to describe its algorithm. Here $A_{ij} = 1_{j=\text{argmax}(P_{ij^\prime})}$ where argmax is over $1\leq j^\prime \leq m$ (i.e., single document) and $P_{ij}=q_i^\top d_j$, meaning that $A_{ij}$ is 1 when $P_{ij}$ is the maximum value among $P_{ij^\prime}$ otherwise 0. Hence, we can replace max over $j$ with $A_{ij}$ in Eq 1. On the other hand, Eq 4 shows the approximated sum-of-max for XTR. In XTR, $A_{ij} = 1_{j \in \text{top-k}(P_{ij^\prime})}$ where the top-k is over $1 \leq j^\prime \leq mB$, meaning that $A_{ij}$ will be 1 when $P_{ij}$ is within the top-k retrieved values among all the $P_{ij^\prime}$ within a mini-batch of $B$ documents. The top-k operator here returns the top-k indices (which will be $k_\text{train}$ during training and $k^\prime$ for inference) among the $j^\prime$s. The main difference with Eq 1 is that the $j^\prime$ spans over all tokens from mini-batch documents in Eq 4, not just from a single document as in Eq 1. As a result, unless all $A_{ij}=1$ (which is unrealistic), Eq 4 will be different from Eq 1 and cannot be collapsed. We hope that our mathematical formulation can provide proper clarification for your concerns. We will add this clarification in the paper and use a different notation for the alignment matrix of XTR (e.g., $A^\text{XTR}$) to prevent any confusion. **A2. 
it would be interesting to see how this method performs with training on NQ, TriviaQA, etc.** It is common practice to use MS-MARCO for training retrieval models in many recent works. Indeed, Ni et al., 2021 showed that using MS-MARCO transfers better than using NQ when evaluated on the BEIR benchmarks. Most of the recent papers on neural retrieval were all trained on MS-MARCO, such as ColBERT, ColBERT v2, SPLADE v2, ColBERTer, PLAID, etc. **A3. For the choice of m_i, Eq. 4 would have collapsed to the form of Eq.1 with extra Ai. Why show it in this form and complicate? Isn't Max over j the same as Aij(qi.Tdj)??** In our paper (176-177), we discuss that Eq 4 would degenerate to Eq 1 when every $A_{ij}$ of Eq 4 is $1$, but this is an unrealistic special case. Based on our description of $A_{ij}$, Eq 4 does not collapse to Eq 1. Please see our response A1 on the description of alignment matrices, which we believe would resolve some of the reviewer's concerns. **A4. Line 120. How is f_colbert(Q,D−)=0.2 when qiTdj−>0.8. Failure case very handwavy. Please explain with proper numbers and reasoning. Is Fig2 different tokens for any +ve/-ve documents or just the retrieved tokens?** It is possible to have $f_\text{colbert}(Q,D^-)=0.2$ even when some $q_i^\top d_j^-$ is as high as 0.8 because the document-level score (e.g. $f_\text{colbert}$) is an average of the token-level pairwise scores (i.e., $q_i^\top d_j$). For instance, imagine there are 6 tokens with $q_i^\top d_j=0.1$ and one token with $q_i^\top d_j=0.8$. $f_\text{colbert}(Q,D^-)$ would become $(0.8 + 0.1 \times 6)/7=0.2$. The failure case example is presented for the sake of intuitive understanding of the motivation. For a more formal understanding of the failure case, we also provided a theoretical analysis of the sum-of-max operator in Appendix A. We will bring some of these results into the main text to make it less confusing. **A5. Lines 134-137. Very handwavy again. Authors should use proper math format to denote in-batch tokens. 
It is very difficult to understand when Aij=1.** We believe that our description in A1 would resolve this concern. **A6. Lines 134-137. Define top-k_train** $k_\text{train}$ is a hyperparameter that decides how many in-batch tokens should be retrieved. We will clarify this in the paper. **A7. For the document tokens and query tokens already in RAM, why not compute the pairwise similarities if Aij=0? This would still retain some speed?** If the token vectors are already loaded in RAM, computing all $A_{ij}$ can be done pretty fast. But we cannot simply assume token vectors are already in RAM. In fact, loading/storing the many token vectors in RAM has been one of the most expensive parts of multi-vector retrieval, and how to make it cheaper is the main problem studied by XTR as well as many prior works (ColBERT v2, PLAID, etc). The main advantage of XTR is to completely remove the need to store any token vectors in RAM and score each document solely based on the retrieved tokens and their scores. While this can significantly reduce the RAM requirement for ColBERT scoring, our general response (G1) on the actual latency also shows how much time we can save with RAM- vs. Disk-based similarity computation as well as with the XTR-style scoring. **A8. Lines 226-241: what is token at rank k? are these top k retrieved tokens?** Yes, the token at rank k means the k-th retrieved token. We will clarify this in the paper. --- Rebuttal Comment 1.1: Comment: Most answers are convincing. At a very high level, you are just using the document tokens retrieved in the first retrieval stage and interpolating document retrieval scores. - By the same logic from lines 115-132, your inference mechanism in Eq. 4 would weigh any D^- much higher than it actually is. This may cause problems. The reason I was asking for training results on other datasets is that this training/inference mechanism you propose might be very specific to MS MARCO. 
Agreed it transfers to BEIR zero-shot better but I'd expect a training mechanism to work with other datasets. What are your thoughts on this? - Can you do this additional experiment of doing inference with Eq 4 on ColBERT (I guess you have trained ColBERT model checkpoints)? I want to better understand the effectiveness of your training mechanism. Take your time on this. Will make sure to check back frequently. Will consider increasing score after your response to this. --- Reply to Comment 1.1.1: Comment: Thank you for reading and considering our responses! ### Q1: Is XTR's training mechanism specific to MS MARCO? Good question. We did an additional experiment training XTR and T5-ColBERT on Natural Questions (NQ). We train with hard negatives from [1]; training and inference used the same hyperparameters as what we originally used for MS MARCO.

Model | Training data | BEIR NQ NDCG@10
---|---|---
DPR (number from [2]) | NQ | 0.474
T5-ColBERT-base | MS MARCO | 0.52
XTR-base | MS MARCO | 0.53
T5-ColBERT-base | NQ | 0.27
**XTR-base** | **NQ** | **0.56**

For T5-ColBERT, we are not able to achieve good performance when training it on NQ. XTR, on the other hand, achieves strong results. This suggests our training mechanism is not specific to MS MARCO. [1] Ni, Jianmo, et al. "Large Dual Encoders Are Generalizable Retrievers." Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing. 2022. [2] Thakur, Nandan, et al. "BEIR: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models." ### Q2: Experiment of doing XTR inference (Eq 4, f_xtr') on ColBERT This is indeed an interesting ablation. We reported this experiment in Table 5 of our paper; below is a summary. We trained T5-ColBERT, which is ColBERT with T5 as the backbone, then ran inference with XTR's scoring function (f_xtr' in Eq. 4). 
Model | Inference | MS MARCO MRR@10 | MS MARCO Recall@1k
--- | --- | --- | ---
T5-ColBERT-base | sum-of-max (*computationally expensive*) | 38.8 | 97.8
T5-ColBERT-base | f_xtr', no imputation | 0.0 | 0.0
T5-ColBERT-base | f_xtr', top-k' score | 27.7 | 91.8
XTR-base | f_xtr', no imputation | 22.6 | 88.7
XTR-base | f_xtr', top-k' score | 37.4 | 98.0

T5-ColBERT relies heavily on the sum-of-max scoring, which requires loading and scoring the full document representation. When scoring with the retrieved tokens (f_xtr', no imputation) instead of the full document representation, T5-ColBERT's accuracy is very low. Our top-k' imputation improves T5-ColBERT's accuracy significantly, but it is still not as good as XTR. This result, along with the qualitative analysis in Table F.1, suggests that ColBERT's training recipe does not guarantee good token retrieval, thus reranking with the full document representation is necessary. On the other hand, XTR is trained for token retrieval, so scoring with retrieved tokens alone (f_xtr') can already give reasonable results. Adding the top-k' score imputation, XTR can be as accurate as T5-ColBERT without the need to look up and score the full document representation.
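To make the distinction drawn in A1 concrete, the two alignment matrices can be sketched in a few lines of numpy. This is my own illustrative reconstruction of the definitions given in the rebuttal (Eq 1 vs Eq 4), not the authors' code; it also checks the failure-case arithmetic from A4:

```python
import numpy as np

def colbert_alignment(P_doc):
    # Eq 1: A_ij = 1 iff j = argmax_{j'} P_ij' WITHIN a single document,
    # so sum-of-max equals (1/n) * sum_ij A_ij * P_ij.
    A = np.zeros_like(P_doc)
    A[np.arange(P_doc.shape[0]), P_doc.argmax(axis=1)] = 1.0
    return A

def xtr_alignment(P_batch, k):
    # Eq 4: A_ij = 1 iff P_ij is among the top-k values for query token i
    # over ALL mB in-batch document tokens (k = k_train during training).
    A = np.zeros_like(P_batch)
    top = np.argsort(P_batch, axis=1)[:, -k:]   # row-wise top-k indices
    np.put_along_axis(A, top, 1.0, axis=1)
    return A

rng = np.random.default_rng(0)
n, m, B = 2, 4, 3                    # query tokens, doc tokens, batch size
P = rng.random((n, m * B))           # similarities to all in-batch tokens
doc0 = P[:, :m]                      # columns belonging to document 0

# Degenerate case (paper lines 176-177): with k = mB every A_ij = 1, and the
# masked max over document 0 collapses to the plain per-document max of Eq 1.
A_full = xtr_alignment(P, m * B)[:, :m]
collapses = np.allclose((A_full * doc0).max(axis=1), doc0.max(axis=1))

# With a small k the mask typically zeroes out some per-document maxima,
# which is why Eq 4 does not reduce to Eq 1 in general.
A_small = xtr_alignment(P, 2)[:, :m]

# Failure-case arithmetic from A4: six tokens at 0.1 plus one token at 0.8
# still average to a document-level score of only 0.2.
avg = (0.8 + 6 * 0.1) / 7
```

The key point of the sketch is the axis the top-k ranges over: `colbert_alignment` looks only within one document's columns, while `xtr_alignment` ranks each query token's similarities against every token in the mini-batch.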
Summary: This paper deals with the problem of document retrieval. First, the paper contrasts and explains the differences between single-vector and multi-vector retrieval models. While multi-vector retrieval models perform better due to their access to more tokens, they involve significant inference costs. The authors propose a new model, the ConteXtualized Token Retriever (XTR), which aims to overcome the disadvantages during inference and bring multi-vector models closer to single-vector models. Experimental results on the BEIR and LoTTE benchmarks show that XTR achieves state-of-the-art results. Post rebuttal: I have read the authors' response and the rebuttal sufficiently addressed my concerns. Strengths: Originality: The paper brought a lot of theoretical analysis showing why and where multi-vector retrieval models fail. This analysis provided a strong justification for their proposed approach. In addition, the authors were able to show the effectiveness of XTR on multiple benchmarks. Significance: Given the gap that exists between single and multi-vector retrieval models in terms of performance vs computational efficiency, this paper bridges the gap by not only making the retrieval efficient but also improving the performance. Hence, I feel this paper is quite significant. Weaknesses: Clarity: I feel this paper needs a little fine-tuning in clarity. In the introduction, I couldn't understand the problem statement (i.e., the document retrieval setting they are dealing with) but there was a lot of motivation. I believe adding a few sentences about the problem statement makes the introduction better. Similarly, in the experiments section, there could be separate sub-sections for the datasets, model comparisons and experimental settings. It will make things more coherent. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: None Confidence: 3: You are fairly confident in your assessment. 
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 4 excellent Limitations: There is sufficient discussion about the limitations after the conclusion section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review. **A1. Clarity: I feel this paper needs little fine-tuning in clarity.** We will definitely add a few more sentences to make the problem statement and the experiment sections clearer. For instance, the third paragraph of the introduction has the problem statement where we can add a few sentences for better understanding. Additionally, for a better description of our method, we provided more formal definitions of the alignment matrices in the response A1 to R4.
Summary: The XTR model extends the ColBERT neural IR model by removing one of its efficiency issues: candidate documents (selected using a dense vector index, e.g. FAISS) have to be re-scored by loading as many vectors as there are tokens in the document. The proposed approach simply reduces the number of vectors representing a document. The candidate score is then (supposedly) equal to the final score (since all the vectors are in the FAISS index). The authors show that in practice XTR performs roughly similarly to ColBERT (on a variety of datasets, including LoTTE, MIRACL and BEIR). Strengths: The paper presents a simple method that improves over ColBERT by reducing the number of vectors per document. The simple strategy used in the paper is a good alternative to harder-to-implement pipelines such as PLAID or ColBERTer. The model performs a little bit worse than ColBERT but improves (in theory) the latency, although the latter is not measured in the paper. The experimental work is well conducted on a variety of datasets, showing the robustness of the proposed approach. Weaknesses: - No experiments measuring the observed latency are reported. While the estimated FLOPs/query is important, seeing the actual difference on the same hardware would strengthen the message - No comparison with PLAID, which implements a lot of strategies to improve the efficiency of ColBERT (which is another way to deal with the problem of the number of vectors in documents). More importantly, another very related work, ColBERTer (CIKM 2022), which also reduces the number of vectors per document using a sparsity-inducing loss, is missing. - Figure 4 should report the recall of gold tokens (rather than the precision) Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Adding a sparsity loss would have been an alternative to the learning scheme: why was this not used since it might give more stability to the process? 
- What is the real latency time of such a model when the number of top-K vectors increases - is $k_{train}$ the same thing as $m$ in eq. 2? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The limitations section is a bit too generic (not really related to the proposed approach). See weaknesses for possible issues to report at this level. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review. **A1. No experiment measuring the observed latency are reported.** Since it is difficult to reimplement every baseline within our hardware and infrastructure, apples-to-apples latency comparisons are a bit tricky. Admitting the differences in implementations, we report the latency of XTR in our general response G1. To summarize, our implementation of XTR runs reasonably fast while significantly improving the speed of T5-ColBERT, both of which are tested in the same environment. **A2. No comparison with PLAID or ColBERTer** PLAID and ColBERTer optimize the first-stage token retrieval, which is orthogonal to the contribution of XTR (a better scoring stage without the gathering stage), and they can be applied to XTR as well. For instance, the sparse optimization techniques of AligneR (Qian et al., 2022), which can be more directly compared to ColBERTer as it optimizes the first-stage token retrieval, can be applied to XTR with minimal performance loss. Please also refer to our general response G2 for the detailed comparisons. **A3. Figure 4 should report the recall of gold tokens (rather than the precision)** While Figure 4 reported the precision of the token at the k-th rank, we agree that the recall of gold tokens would be interesting to see since it would demonstrate how well the token retrieval serves as the first stage of the multi-vector retrieval. We found that indeed, the token retrieval of XTR has higher token recall (i.e., how many tokens of all gold documents are retrieved?), achieving 0.2517 recall@1000 while T5-ColBERT achieves 0.1047 recall@1000 (TREC-COVID). Note that the overall recall would be very low since the model doesn't have to retrieve the entire set of gold tokens, which include many stopwords as well. Similar results were observed on MS-MARCO and other BEIR benchmarks, which translate to the superior document-level recall@100 of XTR (Table E.1 in Appendix). 
We also added figures of the token retrieval recall in the pdf attached in the general response, which we will include in our final draft. **A4. Adding a sparsity loss would have been an alternative to the learning scheme: why was this not used since it might give more stability to the process?** Sparsity losses were extensively explored in prior work. In particular, AligneR (Qian et al., 2022) tested Optimal Transport and L1 regularization techniques to sparsify the pairwise as well as unary alignments. However, while these sparsification techniques make the model decide which tokens of a document are salient, we found that existing sparsification methods do not encourage those salient tokens to be retrieved across other documents. This is primarily because sparsification methods work independently of the query representations. **A5. What is the real latency time of such a model when the number of top-K vectors increases** For the real latency of each stage (retrieval and scoring), please refer to the general response G1 where we show some fair comparisons with increasing top-k vectors. **A6. is k_train the same thing as m in eq. 2?** k_train is a hyperparameter that decides which top tokens in a mini-batch should be taken into consideration for the scoring of mini-batch documents. On the other hand, m is the length of a document $D$ (i.e., the number of tokens in $D$). --- Rebuttal Comment 1.1: Comment: Thanks for your answers. I appreciate the effort with respect to latency results, I think they are important. I still disagree on the positioning of the paper with respect to ColBERTer or PLAID – XTR also tries to increase efficiency at the cost of effectiveness, although the approach is different. I agree however that some methods could be complementary (although it is far from clear if these would lead to additive improvements). --- Reply to Comment 1.1.1: Comment: Thank you for the discussion! 
To show that the contributions of XTR can be complementary with one of the baselines, we implemented a sparsity-inducing loss following [2] to prune unimportant token vectors from the document, which is similar to the pruning done in ColBERTer. The results are as follows:

Model | % document vectors reduced | MS MARCO MRR@10 | MS MARCO Recall@1k
---|---|---|---
XTR-base | 0% | 0.377 | 0.981
XTR-base, with pruning | 29.2% | 0.371 | 0.980
ColBERTer, without pruning* | 0% | 0.387 | 0.960
ColBERTer, with pruning* | 29% | 0.387 | 0.961

*: ColBERTer results are from Table 3 of the ColBERTer paper [1]. ColBERTer's MRR@10 is slightly higher than XTR-base, as ColBERTer uses distillation in training, an extra CLS token for document representation, etc. Our results show that one could further reduce document token vectors by 30% with ColBERTer's vector pruning technique, while still using XTR to simplify and speed up the refinement stage, without hurting the quality. This shows how the improvements can be additive. For PLAID, more investigation would be needed since it requires significant changes to the first-stage token retrieval, which we leave as future work. [1] Hofstätter, Sebastian, Omar Khattab, Sophia Althammer, Mete Sertkan, and Allan Hanbury. "Introducing Neural Bag of Whole-Words with ColBERTer: Contextualized Late Interactions Using Enhanced Reduction." In Proceedings of the 31st ACM International Conference on Information & Knowledge Management, pp. 737-747. 2022. [2] Qian, Yujie, Jinhyuk Lee, Sai Meher Karthik Duddu, Zhuyun Dai, Siddhartha Brahma, Iftekhar Naim, Tao Lei, and Vincent Y. Zhao. "Multi-vector retrieval as sparse alignment." arXiv preprint arXiv:2211.01267 (2022).
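As a side note on the token-recall metric discussed earlier in this thread (A3), the quantity being reported (recall@k of gold-document tokens among the retrieved tokens) can be sketched as follows. Names are illustrative, not from the paper's code:

```python
# Illustrative: recall@k of gold tokens, i.e. what fraction of the gold
# document's token ids appear among the first k retrieved token ids.
def token_recall_at_k(retrieved_token_ids, gold_token_ids, k):
    hits = set(retrieved_token_ids[:k]) & set(gold_token_ids)
    return len(hits) / len(gold_token_ids)

# Of the four gold tokens {2, 5, 9, 11}, only token 9 appears among the
# first three retrieved ids, so recall@3 = 1/4 = 0.25.
r = token_recall_at_k([3, 7, 9, 2], gold_token_ids={2, 5, 9, 11}, k=3)
```

Since gold documents contain many stopword tokens that never need to be retrieved, absolute values of this metric stay low (as the rebuttal notes for the 0.25 vs 0.10 recall@1000 comparison); it is the relative gap between systems that is informative.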
Rebuttal 1: Rebuttal: We thank all the reviewers for their thoughtful comments and feedback. Most of our reviewers agree that XTR effectively mitigates the problem of three-stage inference of multi-vector models and provides significant improvements over strong baseline models. Some of the main concerns include latency reports (with comparison against the existing baselines) (R2, R5) and the presentation quality of the paper (R1, R3, R4). **G1. Actual Latency of XTR** Our initial manuscript did not compare the actual latency due to the differences in libraries and infrastructure used for implementing each baseline and XTR. Specifically, XTR uses ScaNN (Guo et al., 2020), which has different optimization techniques and a different distributed system compared to Faiss (Johnson et al., 2019) used by various baselines. XTR also does not use any centroid-based approximation used by ColBERTv2 or PLAID. Nevertheless, as many reviewers (R2, R5) asked for, the actual latency might provide a clearer sense of the contribution of XTR. Under the same environment, we report the latency of the token retrieval as well as the scoring stage, which uses naive CPU-based Numpy without any multi-processing. The token retrieval stage uses CPU-based distributed ScaNN (same for T5-ColBERT and XTR). For T5-ColBERT, document token vectors are loaded for the exact sum-of-max, either from Disk or RAM. MS-MARCO dev set queries were used to average per-query latencies.

| Top-k' tokens | Token retrieval | T5-ColBERT-Disk scoring | T5-ColBERT-RAM scoring | XTR scoring | XTR total latency |
| --- | :---: | :---: | :---: | :---: | :---: |
| 1,000 | 0.25 sec | 11.56 sec | 3.13 sec | 0.38 sec | 0.63 sec |
| 4,000 | 0.31 sec | 47.05 sec | 12.67 sec | 1.64 sec | 1.95 sec |
| 40,000 | 0.47 sec | 8 min 2.09 sec | 2 min 7 sec | 17.17 sec | 17.64 sec |

Note that these results are mainly for the fair comparisons against the full sum-of-max baseline (i.e., T5-ColBERT) so we did not use multi-processing. 
Our implementation of the XTR scoring with multiprocessing actually takes about < 0.3 sec even when $k^\prime=40,000$, hence giving < 0.8 sec in total. The scoring stage of XTR also does not require loading $\mathcal{O}(nk^\prime\bar{m}d )$ floating points from Disk or RAM as T5-ColBERT does, which can range from 450MB ($k^\prime=1,000$) to 18GB ($k^\prime=40,000$) per query. **G2. Comparison against PLAID and ColBERTer** Prior work such as PLAID and ColBERTer focus on different aspects of multi-vector retrieval. PLAID and ColBERTer focus on improving the efficiency of the first stage token retrieval, while XTR aims to greatly simplify the gathering and scoring stages as well as improve the training objective. More specifically, PLAID 1) uses cluster-centroid based approximated search to provide a small number of candidates and 2) looks up the candidate vectors to apply the full sum-of-max over them. ColBERTer proposes to use whole-word representations with more aggressive dimensionality reduction, reducing the cost of first-stage retrieval. XTR, on the other hand, 1) uses a naive MIPS library that provides a large number of candidates and 2) uses the light-weight scoring function, which eliminates index lookup. XTR additionally modifies the training objective offering better token retrieval. We believe that these paradigms are very orthogonal and hope to explore combining these efforts in the future. For instance, it would be interesting to see if the training objective of XTR could improve the candidate generation part of PLAID so that it can further have a smaller number of candidates. **G3. Presentation Quality** While many reviewers (R1, R2, R5) believe that the presentation of our paper is either good or excellent, we acknowledge that there is room for improvement as R3 and R4 suggested. In particular, we provided detailed clarification on each concern of R4 regarding the equations and examples (please see R4-A1), which will be added in our manuscript. 
For a better introduction, we will add clarifying sentences as R3 suggested (please see R3-A1). We also attached figures related to the question raised by R2-A3. Pdf: /pdf/40682a94e0f8ba767da0688144f13113fd93055e.pdf
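As a rough illustration of the scoring simplification discussed in G1 and G2 (and in A3 of the reviewer responses), the sketch below scores one candidate document purely from its retrieved token similarities, imputing missing query tokens with an upper bound. The function name, the toy shapes, and the choice of imputing with the lowest retrieved score are illustrative assumptions, not the paper's exact formulation:

```python
def xtr_score(retrieved, n):
    """Score one candidate document from retrieved similarities only.

    `retrieved` maps each query-token index to the max similarity the
    document achieved within that token's top-k' retrieval results, or
    None if the document contributed no retrieved token.  Missing entries
    are imputed with an upper bound -- sketched here as the lowest
    retrieved score (a simplification for illustration).
    """
    present = [s for s in retrieved.values() if s is not None]
    bound = min(present) if present else 0.0
    total = sum(s if s is not None else bound for s in retrieved.values())
    return total / n  # average over the n query tokens

# Toy example: 4 query tokens; the document was retrieved for tokens 0, 1, 3.
print(xtr_score({0: 0.9, 1: 0.7, 2: None, 3: 0.8}, n=4))
```

Note that nothing here loads document token vectors, which is the point made in G1: the scoring stage avoids the $\mathcal{O}(nk^\prime\bar{m}d)$ lookup entirely.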
NeurIPS_2023_submissions_huggingface
2023
Summary: This work proposes XTR, the ConteXtualized Token Retriever, a method for multi-vector retrieval with a simple and effective objective function. Compared to prior work on multi-vector retrieval such as ColBERT, where all the tokens from the query and candidate document need to be compared in order to calculate the final query/document score, XTR only uses the retrieved tokens in the document to calculate the score, which greatly reduces inference time. It also provides analysis of the situations in which the ColBERT training objective may fail and proposes a new objective that mitigates the issue. In the experiments section, extensive experiments demonstrate the effectiveness of the proposed method. XTR achieves competitive performance with current SOTA multi-vector retrieval models while being much more efficient at inference time. It also achieves SOTA on the zero-shot retrieval benchmark BEIR and strong results on multilingual retrieval benchmarks. Finally, analysis is presented in section 5 to shed light on why XTR yields better token retrieval. Strengths: - It proposes a method for multi-vector retrieval that achieves strong performance on multiple benchmarks and can greatly reduce inference time compared to prior work on multi-vector retrieval - The proposed method achieves SOTA results on zero-shot retrieval benchmarks and multilingual retrieval benchmarks - It provides insightful analysis of where the previous multi-vector retrieval method (ColBERT) fails and proposes a new training scheme that solves the problem. It also provides examples that validate the proposed method in the qualitative analysis. - It is well written overall, with thoughtful analysis and extensive experimental results Weaknesses: - The XTR method is mainly built on T5 models, without exploration of other architectures. - The analysis of equation (4) could be more elaborate (see Q1).
Technical Quality: 3 good Clarity: 3 good Questions for Authors: Q1: Could you provide more insight into why the upper bound of m_i in equation (4) works? Have you tried other bounds/estimations? Q2: In Table 1, what do the estimated FLOPs/query look like for single-vector retrieval methods such as DPR? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: - The authors listed that XTR is trained on the MS-MARCO dataset, which may have license issues. - The proposed method is only trained on T5 models, and no other type of architecture has been explored. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed review of our work. **A1. The XTR method is mainly built on T5 models, without exploration of other architectures.** We mainly used T5 models (encoder-only) since it is easier to scale the architecture (from base to xxl) and they have been shown to work well for the initialization of dual encoders such as GTR and Sentence-T5. In particular, GTR and Sentence-T5 have already been shown to work as well as or better than BERT-based encoders such as DPR or Sentence-BERT. In the future, we are planning to use stronger and larger pre-trained language models such as PaLM or LLaMA, but using decoder-only LMs as retrievers is out of scope for this work. **A2. The analysis on equation (4) can be more elaborate (also related to Q1).** Please see our response A1 to R4 for a more formal description of the alignment matrices. Specifically, we defined the alignment matrix in a more mathematical way, giving a clear distinction between Eq 1 and Eq 4. **A3. Could you provide more insights on why the upper bound of m_i in equation (4) works? Have you tried other bounds/estimations?** The success of our upper bound-based approximation tells us that estimating the missing value in a query-dependent manner is more important than imputing it exactly. As reported in Table 5, constant imputation does not work well. On the other hand, more sophisticated imputation methods such as power-law-based imputation, which was omitted from the paper for brevity, did not provide significant improvement despite their complexity. As a result, we use the upper bound-based estimation, which is simple to apply while robust enough to provide a good estimate. We will report the power-law-based imputation method in the final version. **A4. 
In Table 1, what do the estimated FLOPs/query look like for single-vector retrieval methods such as DPR?** For single-vector retrieval, FLOPs/query is much lower since there are not multiple representations of each query and document. Note that single-vector retrieval does not have a scoring stage, since the retrieval directly provides the final scores and rankings. Assuming sublinear MIPS, dual encoders like DPR/GTR would have $2d \log L$ FLOPs/query for retrieval, where $L$ is the number of documents (e.g. $d=768, L=5\times10^6$ for DPR). On the other hand, naive multi-vector retrieval models would have $2nd \log M$, where $n$ is the number of query tokens and $M$ is the number of document tokens in a corpus (e.g. $d=128, n=16, M=1\times 10^9$ for XTR). While our focus in this work is to better optimize the scoring stage in terms of FLOPs/query and memory usage, previous works such as PLAID and ColBERTer, which reduce the complexity of the first-stage token retrieval, are worth mentioning, as described in our general response G2.
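These back-of-the-envelope estimates can be reproduced directly from the formulas and numbers quoted above; the base of the logarithm is not stated in the response, so base 2 is assumed here:

```python
import math

def single_vector_flops(d, num_docs):
    # Sublinear MIPS over L document vectors: 2 * d * log(L) FLOPs/query.
    return 2 * d * math.log2(num_docs)

def multi_vector_flops(d, n, num_tokens):
    # One MIPS search per query token over M document tokens: 2 * n * d * log(M).
    return 2 * n * d * math.log2(num_tokens)

# Numbers quoted in the response (DPR-style vs. XTR-style retrieval).
dpr = single_vector_flops(d=768, num_docs=5e6)    # ~3.4e4 FLOPs/query
xtr = multi_vector_flops(d=128, n=16, num_tokens=1e9)  # ~1.2e5 FLOPs/query
print(f"DPR retrieval: {dpr:,.0f} FLOPs/query")
print(f"XTR retrieval: {xtr:,.0f} FLOPs/query")
```

This makes the point of the response concrete: even with many query tokens, the retrieval-stage FLOPs of XTR are within a small constant factor of a single-vector retriever, and the savings discussed elsewhere come from eliminating the gathering/scoring cost.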
Normalizing flow neural networks by JKO scheme
Accept (spotlight)
Summary: The paper introduces a novel approach to train continuous normalizing flows/score-based diffusion models that exploits Wasserstein gradient flow theory and the JKO iterative scheme. The main idea is to approximate the density evolution of the variance-preserving forward dynamics of a diffusion model (i.e. an Ornstein–Uhlenbeck process) with a series of deterministic transport maps. Under some conditions, the resulting deterministic dynamics is invertible and can then be used to map the stationary distribution back to the (implicitly learned) data distribution. In practice, this is very similar to what is done in ordinary score-based diffusion models, but the networks are trained in a radically different way, since with the JKO approach the score is trained indirectly through a deterministic algorithm. Strengths: I find this work exciting. The approach is very original, as it introduces a brand-new way to train score-based models, which is backed by serious theoretical arguments. This both increases my confidence in the algorithm and opens the door for possibly very fruitful connections between several important research areas. The paper is well explained, although it relies on several complex concepts that could alienate some readers. However, I think that in this case the mathematical complexity is unavoidable and gives added value. I appreciate that the authors do not solely focus on the mathematical framework but also discuss several important issues in the algorithmic implementation. The experimental section is convincing and rather comprehensive. The results on the low-dimensional datasets are good when compared with other state-of-the-art methods. The results on images are less impressive but overall convincing. While, in general, the application to generative computer vision needs to be fully fleshed out, I agree with the authors that this goes beyond the scope of this work.
Weaknesses: I think that, generally speaking, the paper does not have major weaknesses. However, the part of the experiment section dedicated to image generation would have benefited from having a larger scope, for example by including experiments on CelebA HD and ImageNet. The selection of evaluation metrics should be complemented with FID scores, since this is the standard metric in the image generation literature and it is generally more reliable than the BPD score. Note that I do not think that having a lower FID score than the baseline is an argument for rejection, given the theoretical and algorithmic novelty of the work. The exposition would benefit from a more extended section connecting the JKO approach to the standard score-matching diffusion theory. This would greatly help readers coming from the diffusion literature, who in my opinion are one of the most important audiences for this work. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: - Can you elaborate on the connections between the JKO iterative training and the score-matching training of diffusion models? It would be informative to discuss differences and similarities and give an overview of the strengths and weaknesses of each approach. - I would like to see the FID scores on MNIST and CIFAR10. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: Nothing to discuss. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Please refer to the global response for the questions on experiments of image generation and FID scores. * **(Weakness, 3rd paragraph) The exposition would benefit from a more extended section connecting the JKO approach to the standard score-matching diffusion theory. This would greatly help readers coming from the diffusion literature, who in my opinion are one of the most important audiences for this work.** We thank the reviewer for the suggestion. The connection between the JKO approach and the standard score-matching diffusion model is summarized below, and we will add it to the revised manuscript. First, it is known that the diffusion SDE for the stochastic process formed by the particles is connected to a deterministic ODE via the FPE (Fokker-Planck Equation). Specifically, the solution of the FPE, Eqn (3), of the (variance-preserving) SDE is equivalent to that of the continuity equation (CE) of the ODE, Eqn (2), when the velocity field $f(x,t)$ is set to Eqn (9). While this relation has been used in the reverse-time (generating) process of score-matching diffusion models [Song et al., 2021], the JKO approach here directly learns a discrete-time ODE forward process and avoids sampling in training. The forward process approximates the CE, and equivalently the FPE, when the step size is small, thanks to the JKO theory [Jordan et al., 1998]. Meanwhile, unlike the score-based diffusion model, which tries to learn the score $\nabla \log \rho_t$, or equivalently the velocity field $f(x,t)$ that leads to the exact solution of the FPE, the JKO solution with finite step size only approximates the solution of the FPE.
While such an approximate sequence of densities $p_k$ does not have an interpretation in physics, for the problem of normalizing flow it can actually provide a good CNF model without learning the exact solution of the FPE: this is because as long as the last $p_k$ is close enough to the normal density, and the transport in each learned residual block $T_k$ can be accurately inverted, this guarantees the accuracy of generating the data density in the reverse process. In this view, the JKO approach can explore CNF models that transport from the normal density to the data density and back without requiring the intermediate process to exactly follow the diffusion process. The block-wise training scheme also allows savings and flexibility both in neural network architecture and in the training algorithm. Ref: Song et al. Score-Based Generative Modeling through Stochastic Differential Equations. ICLR 2021. Jordan, Richard, David Kinderlehrer, and Felix Otto. "The Variational Formulation of the Fokker–Planck Equation". SIAM Journal on Mathematical Analysis 29, no. 1 (1998): 1-17. * **(Question 1) Can you elaborate on the connections between the JKO iterative training and the score-matching training of diffusion models? It would be informative to discuss differences and similarities and give an overview of the strengths and weaknesses of each approach.** Thanks for the suggestion. We will explain the differences and similarities of the two in the “connection” section as in the answer to the previous question, and also add the following summary of strengths and weaknesses to the draft. The score-matching training has the advantage of more efficient optimization of the neural network that parametrizes the score function, thanks to the score-matching objective.
In comparison, the current neural ODE model used in the JKO approach involves more expensive backpropagation of the neural network due to the ODE solver and the Hutchinson projection, which undermines the scalability to higher dimensionality. Yet by paying this price, the CNF model has the advantage of optimizing the likelihood as the training objective and the ability to compute the likelihood by time integration. However, in training and generation the JKO approach has the advantage that it computes deterministic dynamics in both the forward and reverse processes, avoiding simulating the SDE trajectories (injecting noise). The ODE integrator in the JKO approach allows us to choose a much larger step size in the discrete-time process compared to what is needed by the SDE simulation of score-based models. Another advantage of the JKO approach lies in the block-wise training. In addition to the savings of computation and memory, the block-wise training of the JKO-iFlow model has the potential flexibility that different blocks may have different architectures, which may be advantageous in certain settings. This type of flexibility may not be easy to implement in the end-to-end training of most score-matching diffusion models. Finally, the JKO scheme is general and may be combined with other flow-based backbone models to overcome the computational issue, possibly with specific designs for certain applications; see more in the answer to R1. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response. I am happy to keep my score and to recommend acceptance.
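For readers from the diffusion literature, the SDE–FPE–ODE correspondence discussed in this exchange can be written in standard notation (the VP form of Song et al., 2021; the paper's Eqns (2), (3), (9) are assumed to match this up to notation):

```latex
% Variance-preserving forward SDE (Ornstein--Uhlenbeck type):
dX_t = -\tfrac{1}{2}\beta(t)\,X_t\,dt + \sqrt{\beta(t)}\,dW_t ,
% whose Fokker--Planck equation for the marginal density \rho_t is
\partial_t \rho_t
  = \nabla\cdot\!\big(\tfrac{1}{2}\beta(t)\,x\,\rho_t\big)
  + \tfrac{1}{2}\beta(t)\,\Delta\rho_t .
% The same marginals solve the continuity equation
% \partial_t\rho_t + \nabla\cdot(f\rho_t) = 0 of the deterministic ODE
% \dot{x} = f(x,t) with velocity
f(x,t) = -\tfrac{1}{2}\beta(t)\,\big(x + \nabla\log\rho_t(x)\big) .
```

Score matching learns $\nabla\log\rho_t$ (and hence $f$) directly; the JKO approach instead learns a discrete-time forward transport whose density sequence approximates the same FPE solution.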
Summary: This paper presents an innovative method for enhancing the stability of trajectories in CNF (Continuous Normalizing Flow) models. The authors propose incorporating the JKO scheme, which introduces regularization between the current density and the base distribution. The core concept revolves around leveraging the Wasserstein distance to learn an optimal transport map that guides the data distribution towards a normal equilibrium. To assess the effectiveness of the model, experiments are conducted on various scenarios, including toy examples, tabular data, and higher-dimensional datasets like CIFAR or MNIST. Strengths: - The idea behind the paper is novel and non-trivial. - Optimizing the velocity field instead of the transport map using the JKO scheme is intriguing. - The training procedure given in the Algorithm is very efficient compared to reference baselines. - The method, which can be seen as a particular case of regularisation of CNF models, achieves outstanding results on tabular benchmark datasets. - The authors consider higher-dimensional cases by conducting experiments on CIFAR. This is not common for such papers. Weaknesses: - The proposed method still does not provide good-quality image samples. This group of models is rather dedicated to obtaining high values of NLL and modeling the distributions of low-dimensional data. - The variety of experiments conducted in this work may be richer. In particular, it would be interesting to see how the proposed approach works in some scenarios where CNFs perform well, like modeling multidimensional probabilistic regression, using the model in the latent space of VAEs, or as a plugin model for large models (like StyleFlow). Technical Quality: 3 good Clarity: 2 fair Questions for Authors: I do not have any particular questions for the authors. Confidence: 3: You are fairly confident in your assessment. 
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The main limitation of the proposed approach is the lack of capability to generate good-quality image data, but this group of methods is rather not designed for such problems. Moreover, the evaluation may be a bit broader and include some other applications mentioned in the "weaknesses" section. Besides that, I admire the contribution of the work, and I think it should be accepted to the conference. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Please refer to the global response for the question on image quality and additional results of image generation using the VAE latent space. * **(Weakness 2) The variety of experiments conducted in this work may be richer. In particular, it would be interesting to see how the proposed approach works in some scenarios where CNFs perform well, like modeling multidimensional probabilistic regression, using the model in latent space of VAEs, or as a plugin model to large models (like StyleFlow).** We thank the reviewer for suggesting a rich class of applications that are suitable scenarios for applying CNF models. For the latent space of VAEs, our real image generation experiments (CIFAR10 and the newly added ImageNet-32) are actually obtained by applying the JKO-iFlow model in the VAE latent space; see more in the global response above. We agree that other problems like multi-dimensional probabilistic regression [Chen et al., 2018] and plugins to deep architectures like StyleFlow [Abdal et al., 2021] are interesting extensions. We will add all this to the discussion of future directions. Ref. Chen R T Q, Rubanova Y, Bettencourt J, et al. Neural ordinary differential equations. NeurIPS 2018. Abdal R, Zhu P, Mitra N J, et al. StyleFlow: Attribute-conditioned exploration of StyleGAN-generated images using conditional continuous normalizing flows. ACM Transactions on Graphics (ToG), 2021. --- Rebuttal Comment 1.1: Title: Thank you Comment: Dear authors, thank you for the rebuttal. I maintain my decision, and I believe the paper should be accepted.
Summary: This paper proposes learning an invertible normalizing flow using a neural ODE as a unique solution to the Fokker-Planck Equation (FPE) of the transport problem from the data distribution to the equilibrium solution. The solution of the FPE is obtained by using the JKO scheme, formulating it as a variational problem. The critical contribution of the paper is that each residual block of the invertible neural ODE corresponds to a JKO step, and the training objective can be computed from data samples pushed through the previous blocks. This leads to a block-wise procedure to train the JKO-iFlow model, which is computationally more efficient than the end-to-end approaches in the literature. The paper also proposes a way to determine the number of blocks adaptively by reparameterizing the computed trajectory in the probability space with refinement to improve the model accuracy and the overall computational efficiency. Lastly, they empirically show that their method has competitive or better generative performance compared to existing flow and diffusion models on synthetic and real data. Strengths: This paper is novel, and the idea of using iterative block-wise training is a good one. This strategy can be applied to other works as well and I think is a good idea. The formulation of the loss function from the JKO step is also very well written and clear. One strength of the paper is that the proposed method is very flexible, as they used a neural ODE with sufficiently small step sizes in the residual blocks to ensure invertibility, making their method very general and free of the restrictive architectures typical of other invertible flow models. Weaknesses: The experiments are on relatively small datasets. It would be instructive to test how the model performs on larger datasets like mini-ImageNet or even the CelebA dataset. The MMD loss could use different kernels or even mixed kernels to better evaluate the quality of the generative models. 
When comparing JKO-iFlow against DDPM, it might be valuable to discuss which noise schedule is used for DDPM on each dataset, as DDPM is sensitive to the noise schedule used. Thus, it might lead to an unfair comparison if DDPM is not properly tuned. The authors can also include inverse problems in their experiments, as one of the advantages of DDPMs is the fact that they can easily be adapted to tackle inverse problems as well. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Could there be probability trajectories that are particularly "stiff", much like how there are stiff ODEs that require very small step sizes for the numerical ODE method to be stable and have the resulting solution be close to the actual trajectory? The FPE does not only appear in image generation; in mean-field game theory the Fokker-Planck equation describes the dynamics of the aggregate distribution of agents. Can JKO-iFlow work well on these other problems? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors adequately address the limitations of their work, and there are no obvious potential negative social impacts of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Please refer to the global response for the questions on experiments of image generation. * **Different kernels in MMD to evaluate generative models** We agree with the reviewer that the MMD loss metric depends on the choice of the kernel, and a variety of kernels may be adopted for evaluating the trained generative model. In our experiments, we adopt the Gaussian kernel to stay consistent with reported baselines from the literature [Onken et al., 2021] for the purpose of comparison. Being aware of the sensitivity of the kernel MMD to the choice of the kernel parameter, we performed MMD calculations with different bandwidth selections. In addition, to ensure that the comparison of MMD values is meaningful, we applied bootstrap to compute the threshold of the MMD test under the null (when the two distributions are the same), and all values to be compared are above the threshold. The details of the MMD tests, including bandwidth selection, are provided in Appendix C.1.2. Further evaluation of the generative models using more metrics would be an interesting direction for future work. * **Noise schedule for DDPM to ensure fair comparison** We thank the reviewer for the suggestion, and have conducted additional experiments with different noise schedules following DDPM [Ho et al., 2020] to ensure a fair comparison. Below, we use three schedules (i.e., the $\beta(t)$ definition in [Song et al., 2021]) with five different choices for $\bar{\beta}_{\max}$ in the variance-preserving ScoreSDE model, of which DDPM can be viewed as a discrete-time version. As shown in Table 1 below, the performance of DDPM is indeed sensitive to the noise schedule. However, the best NLL of DDPM from the table, 17.45, is still noticeably higher than the NLL of 12.55 obtained by JKO-iFlow, and the latter is trained using ten times fewer batches. 
Table 1: NLL per combination of noise schedule and $\bar{\beta}_{\max}$; the three settings ‘linear, constant, quadratic’ follow the DDPM paper [Ho et al., 2020]. The mean and standard deviation are computed over three replicas of the trained model.

|Noise Scheduler \\ $\bar{\beta}_{\max}$|1|5|10|15|20|
|-|-|-|-|-|-|
|linear|24.51 (0.24)|18.73 (0.64)|20.28 (0.38)|26.76 (1.77)|26.83 (1.23)|
|constant|25.47 (0.25)|30.37 (0.84)|37.73 (0.98)|40.47 (1.27)|45.32 (0.40)|
|quadratic|27.19 (0.14)|17.45 (0.17)|18.48 (0.38)|18.90 (0.48)|21.35 (0.60)|

Ref. Ho et al. Denoising diffusion probabilistic models. NeurIPS 2020. Song et al. Score-based generative modeling through stochastic differential equations. ICLR 2021. * **To include inverse problems in experiments, for which DDPM has an advantage** We appreciate the reviewer's suggestion on the application to inverse problems. For image data, image restoration problems can be cast as a conditional generation task. In this work, we have investigated conditional generation in Section 5.5, exemplified by graph data, where we applied the proposed model to learn from data the conditional distribution $X|Y$, where $X$ and $Y$ are nodal descriptors and labels on a graph, respectively. We demonstrated that JKO-iFlow outperforms the previous baseline in terms of generative quality, NLL, and MMD metrics. This setting may also be viewed as an inverse problem. The extension to other inverse problems is an interesting future direction. * **Can probability trajectories be stiff and require a small step size to ensure stability and accuracy of the numerical ODE?** We thank the reviewer for asking this interesting question. First, in the context of JKO-iFlow, a neural ODE block is used to parametrize the $k$-th step transport map $T_k$, which pushes forward from $p_{k-1}$ to $p_k$. Theoretically, as long as both distributions have smooth densities, there exists a smooth transport map in between [Villani 2021]. 
Meanwhile, since the JKO scheme with a small step size approximates the solution of a diffusion-process FPE, which smooths the distributions as time increases, we expect the discrete-time densities $p_k$ to also become more and more smooth as $k$ increases. In the case that the initial-time density (the data distribution) is not regular, this will make the flow at early times irregular, and we observed that this is where “stiffness” may more often be present. As the reviewer pointed out, this can require a small step size in the neural ODE. The time-reparametrization introduced in Section 4.2 can be viewed as an adaptive way to adjust the step size to overcome this difficulty, and we have shown that this improves the performance of the JKO-iFlow model in experiments, specifically on toy data and the MINIBOONE tabular data (see Sections 5.2 and 5.3). There are other works that address ODE stiffness and improve the stability of neural ODEs, such as [Kim et al., 2021], which can be readily incorporated into our framework if needed. Ref. Cédric Villani. Topics in optimal transportation, volume 58. American Mathematical Soc., 2021. Kim S, Ji W, Deng S, et al. Stiff neural ordinary differential equations. Chaos: An Interdisciplinary Journal of Nonlinear Science, 2021, 31(9). * **FPE also appears in mean-field games; can JKO-iFlow work well on these other problems?** We thank the reviewer for suggesting the application of the JKO-iFlow model to broader problems of mean-field games (MFG). The current work focuses on the continuous normalizing flow (CNF) setting and the application to generative models. Under the framework of MFG from time $0$ to $T$, if one sets the terminal cost as the KL-divergence between the density $\rho_T$ and $p_Z = N(0, I)$, then the CNF problem is a special case of the MFG problem. 
We think that the JKO-iFlow model potentially can be applied to other MFG problems, and such extensions would be a very interesting future direction.
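The three noise schedules compared in Table 1 above can be sketched as discrete $\beta(t)$ grids; the exact parameterizations below (the endpoints, the constant level set to $\bar{\beta}_{\max}$, and the quadratic-in-$\sqrt{\beta}$ spacing) are assumptions in the spirit of the 'constant, linear, quadratic' ablation of Ho et al. (2020), not taken from the rebuttal:

```python
import math

def beta_schedule(kind, beta_min=1e-4, beta_max=20.0, num_steps=1000):
    # Discrete beta(t) grid on t in [0, 1]; parameterizations are assumed,
    # following the 'constant, linear, quadratic' ablation in Ho et al. (2020).
    ts = [i / (num_steps - 1) for i in range(num_steps)]
    if kind == "linear":
        return [beta_min + t * (beta_max - beta_min) for t in ts]
    if kind == "constant":
        return [beta_max] * num_steps
    if kind == "quadratic":
        # Quadratically spaced: linear in sqrt(beta), then squared.
        lo, hi = math.sqrt(beta_min), math.sqrt(beta_max)
        return [(lo + t * (hi - lo)) ** 2 for t in ts]
    raise ValueError(f"unknown schedule: {kind}")
```

Since squaring a linear ramp gives a convex curve, the quadratic schedule stays below the linear one in the interior, i.e. it injects less noise at intermediate times for the same $\bar{\beta}_{\max}$.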
Summary: The authors introduce a novel normalizing flow training algorithm that integrates continuous normalizing flows and the JKO scheme. The objective of the training algorithm is to minimize the KL divergence between the current density and the equilibrium density. The proposed method is inspired by the JKO scheme to enable adaptive block-wise training of residual networks. Furthermore, the authors demonstrate their method on synthetic and open-source datasets. Strengths: 1). The proposed method is computationally efficient compared to other normalizing flow methods or diffusion models. 2). Novel idea of optimizing the velocity field instead of the transport map, motivated by the JKO scheme. 3). The paper is well-motivated and easy to understand. Weaknesses: 1). The paper lacks metrics/empirical validation on larger images. FID metrics for Figure 4 should be necessary considering the paper is about generative modeling, and it is challenging to tell by eye how adequate the CIFAR-10 samples are because they have such low resolution. 2). A significant highlight is that block-wise training leads to a large reduction in computation cost compared to other popular generative models (Figure 4), but the authors do not highlight that the proposed scheme uses a pre-trained auto-encoder, where some of the other methods do not. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1). Is it possible for the authors to provide FID scores for both MNIST and CIFAR-10? 2). How good is the proposed method without using a pre-trained auto-encoder? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The limitations are clearly discussed. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Please refer to the global response for the questions on experiments of image generation and FID scores. * **(Weakness 2 & Question 2): The proposed scheme uses a pre-trained auto-encoder, where some of the other methods do not. How good is the proposed method without using a pre-trained auto-encoder?** We thank the reviewer for this good question. We mainly propose to use the current method in the latent space for image generation tasks, which will be clarified in the revised draft. Directly applying the current approach to image pixel space is possible, but would incur a high computational cost due to the likelihood-based training of the neural ODE backbone model; see more in the global response on "the extension to larger images". Consequently, the computational advantage of the block-wise training would be discounted. At the same time, we would like to point out that the block-wise training of the JKO scheme aims to compute a sequence of transported distributions that get progressively closer to the normal density, and this may be combined with other backbone models that are more suitable for computation in image pixel space. One possibility is adopting flow models trained to match the desired velocity field [Albergo et al., 2023; Lipman et al., 2023]. We will clarify the limitations and discuss future directions in the discussion section. Ref. Albergo M S, Vanden-Eijnden E. Building normalizing flows with stochastic interpolants. ICLR 2023. Lipman Y, Chen R T Q, Ben-Hamu H, et al. Flow matching for generative modeling. ICLR 2023. --- Rebuttal Comment 1.1: Title: Thank you for the response Comment: I want to thank the authors for their detailed responses. I am happy to keep my score the same and recommend this paper for acceptance.
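For reference, the JKO step that the block-wise training discretizes can be written as follows (notation assumed from Jordan et al., 1998: $h$ is the step size, $p_Z$ the equilibrium normal density, and $W_2$ the 2-Wasserstein distance); each residual block $T_k$ then parametrizes the transport from $p_{k-1}$ to $p_k$, so the objective can be evaluated on samples pushed through the previous blocks:

```latex
p_k \;=\; \operatorname*{arg\,min}_{p}\;
  \mathrm{KL}\!\left(p \,\middle\|\, p_Z\right)
  \;+\; \frac{1}{2h}\, W_2^2\!\left(p,\, p_{k-1}\right),
  \qquad k = 1, 2, \ldots
```

The $W_2$ proximal term is what keeps each block's transport small, which is also why the inversion of each block stays accurate.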
Rebuttal 1: Rebuttal: Thanks for the constructive comments and feedback provided by all the reviewers. The common questions are about the experiments on image generation, which we first address here. The additional questions and comments of each reviewer are addressed in the specific responses below. R1 = Reviewer NZGt, R2 = Reviewer uXKQ, R3 = Reviewer xTiV, R4 = Reviewer 3zMK. * **Quality of generated images, quantitative evaluation (FID), and additional image examples. [R1 Question 1, R3 Weakness 1 & Limitations, R4 Question 2]** We have conducted additional experiments on CIFAR10. Please see the attached PDF for generated images by the final model, which show improved image quality. The model achieves an FID of 29.10. The improvement from the initial submission comes from using a larger VAE model following [Rombach et al., 2022] and longer training time (24 hours on one A100 GPU, the VAE training is extra). We have also computed the FID for MNIST, which is 7.95 using JKO-iFlow in the code space of a pre-trained VAE. To extend the image generation experiments, we further evaluated our model on ImageNet-32, which consists of 1.28 million training images. The generated images are also shown in the attached PDF, and the model achieves an FID of 20.10 after training for 30 hours on one A100 GPU. We are aware that these FIDs do not achieve the SOTA performance of some diffusion-based models, but these results are obtained with less computation, and the image generation performance is better than that of most CNF baselines. We will incorporate these results in the paper revision. * **Extension to larger images [R1 Weakness 1, R2 Weakness (1st-2nd sentences), R4 Weakness paragraphs 1 & 2]** We thank the reviewers for the suggestion, and larger images are certainly an important application direction of the proposed approach. Our current method for image generation applies a flow model in the latent VAE space.
Note that generative models in the latent space, like StableDiffusion [Rombach et al., 2022], obtain SOTA image generation and are popular approaches for many industry applications. Thus, the VAE+flow approach has the potential to generate larger images. Meanwhile, we do think the extension to larger images would be more efficient if some fundamental development of the neural ODE can be done first. A main reason is that the neural ODE backbone model currently adopted by this work does not scale well to high-dimensional spaces, due to the expensive backpropagation through the ODE solver and the Hutchinson projection used to compute $div(f)$ in the likelihood-based training. The current JKO approach is for general data and not specifically designed for large image generation tasks (as has been pointed out by R3). To extend the JKO approach to larger images, one can proceed by developing a more efficient neural ODE that scales to higher-dimensional latent or pixel spaces, or by combining the JKO scheme with other backbone models more suitable for image generation tasks; see more in the answer to R1 below. We leave these developments to future work and will add all this to the discussion. Ref. Rombach R, Blattmann A, Lorenz D, et al. High-resolution image synthesis with latent diffusion models. CVPR, 2022. Pdf: /pdf/8f069e6d545e9a949cf1f89d5df01ef5abd01756.pdf
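The Hutchinson projection mentioned above estimates $div(f)=\mathrm{tr}(\partial f/\partial x)$ from matrix-vector products alone, using $\mathbb{E}[v^\top J v]=\mathrm{tr}(J)$ for Rademacher probes $v$. A minimal numpy sketch of that estimator (not the authors' implementation; the explicit matrix `J` is a stand-in for the velocity field's Jacobian):

```python
import numpy as np

def hutchinson_divergence(jacobian, dim, n_probes=10_000, rng=None):
    """Estimate div(f) = tr(J) for a linear field f(x) = J x using
    Rademacher probes: E[v^T J v] = tr(J)."""
    rng = np.random.default_rng(rng)
    est = 0.0
    for _ in range(n_probes):
        v = rng.choice([-1.0, 1.0], size=dim)  # Rademacher probe vector
        est += v @ (jacobian @ v)
    return est / n_probes

rng = np.random.default_rng(0)
J = rng.standard_normal((5, 5))
approx = hutchinson_divergence(J, 5, rng=1)
# unbiased estimator; with 10k probes the standard error here is small
assert abs(approx - np.trace(J)) < 0.5
```

In an actual neural ODE, `jacobian @ v` would be replaced by a vector-Jacobian product, so each probe costs one backward pass rather than forming the full Jacobian; this per-probe cost is exactly the expense referred to above.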
NeurIPS_2023_submissions_huggingface
2023
Learning Adversarial Low-rank Markov Decision Processes with Unknown Transition and Full-information Feedback
Accept (poster)
Summary: The paper studies low-rank MDPs with adversarially changing losses in the full-information feedback setting. They assume the unknown transition probability function admits a low-rank matrix decomposition. They present the POLO algorithm, a policy optimization-based algorithm, and prove it has sublinear regret in the number of episodes $K$, but a linear dependency on $d$, the rank of the transition kernel. The regret is w.r.t. the optimal policy for the discounted return value function. The authors claim they are the first to present an algorithm that interleaves representation learning, exploration, and exploitation to achieve a sublinear regret guarantee for RL with nonlinear function approximation and adversarial losses. Strengths: 1. POLO is the first algorithm that achieves sublinear regret with unknown features (although the setting is full information). 2. The authors provide a clear comparison between their work and previous works in the area, which emphasizes their contribution. 3. The regret upper bound is stated clearly, and the analysis sketch is standard and clear (not fully checked). Weaknesses: 1. The full information feedback is a restrictive and unrealistic assumption. 2. The extensive literature review in Section 2 can be shorter, and that space used instead for more proof sketches, which in my opinion can benefit the reader. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. What is the difference between low-rank MDPs and Generalized Linear Models? As I see it, GLM is the more general model. Do your results hold for this setting as well? 2. Same question for low Bellman rank MDPs, which also seem to generalize low-rank MDPs. 3. Do you have thoughts about how to remove the full-information assumption? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable comments and suggestions. Our response to each question is provided in turn below. **Q1. "The full information feedback is a restrictive and unrealistic assumption."** We do believe that extending the analysis of our work to the bandit feedback case is an interesting and important next step, as mentioned in Section 6. However, we would like to note that, even in linear mixture MDPs with given true feature mappings, it is common to first study adversarial environments with full-information feedback [1,2] before moving to the bandit feedback case [3]. Moreover, the SOTA regret guarantee obtained by policy optimization (PO) based methods for the more amenable adversarial linear MDPs with given true features in the full-information setting is $\widetilde{O}(K^{3/4})$ (omitting other dependence), as shown by a concurrent work [4]. Our algorithm obtains a regret guarantee with the same dependence on $K$ and can additionally work when no true feature mappings are known a priori. Besides, we would like to remark that learning adversarial linear MDPs with bandit feedback is already rather challenging even with the true feature mappings given, where the SOTA regret is only of $\widetilde{O}(K^{6/7})$ order [5] (to the best of our knowledge) when no simulators or exploratory assumptions are given. **Q2. "The extensive literature review in Section 2 can be shorter, ... can benefit the reader."** We will revise our paper accordingly to make Section 2 more succinct and include more proof sketches for better readability. Specifically, we plan to (a) merge Sections 2.1 and 2.2 into a new section; and (b) include more proof sketches and discussions of our Theorem 5.1 and related key lemmas in Section 5. **Q3. "What is the difference between low-rank MDPs and Generalized Linear Models?
..."** In GLMs, it is assumed that $\mathbb{E}[Y \mid X]=\mu\left(X^\top \theta^\star\right)$, where $\theta^\star \in \mathbb{R}^d$ is the unknown parameter, $X$ is the **known** feature vector, $Y$ is the response variable, and $\mu: \mathbb{R} \rightarrow \mathbb{R}$ is the link function. On the other hand, in low-rank MDPs, the transition $P^\star$ is assumed to admit a low-rank decomposition, meaning that $\mathbb{E}[\mathbb{I}\{s=s_{t+1}\} \mid s_t,a_t]=\mu^{\star}\left(s\right)^{\top} \phi^{\star}(s_t, a_t)$, where both the feature vectors $\mu^{\star}\left(s\right)$and $ \phi^{\star}(s_t, a_t)$ are **unknown**. Therefore, in general, these two models can not imply each other and the results for each model might not be directly comparable. **Q4. "Same question for low Bellman rank MDPs, ..."** Indeed, there exist some other works studying RL with general function approximation [6,7,8], which subsumes low-rank MDPs as a special case. However, we note that the algorithms in these works are version space algorithms and thus are generally not computationally efficient. In contrast, the algorithms in previous works studying low-rank MDPS including ours are computationally feasible in practice, as discussed in our Section 7. Besides, they assume deterministic reward functions, and thus their results are not applicable in our setting. **Q5. "Do you have thoughts about how to remove the full-information assumption?"** To tackle bandit feedback, if we restrict our attention to PO-based methods, one may consider the *dilated* bonuses to facilitate improved global exploration, which is vital for current PO-based methods to learn adversarial linear MDPs with bandit feedback [9,10,5]. Nevertheless, the construction of such dilated bonuses critically relies on the unknown true feature mapping $\phi^\star(\cdot,\cdot)$, and thus it is currently not clear whether they can be extended to low-rank MDPs. 
Also, as mentioned above, the SOTA regret of such methods for learning adversarial linear MDPs with bandit feedback is only of $\widetilde{O}(K^{6/7})$ order [5]. Besides, if occupancy measure-based methods are also under consideration, a feasible way might be to construct some sort of confidence set for the learned empirical transitions, like the confidence intervals constructed in tabular MDPs [11] and the ellipsoid confidence sets constructed in linear mixture MDPs [3], so as to build optimistically biased loss estimates that tackle bandit feedback. However, as mentioned in Section 6, it is also not very clear how to guarantee such point-wise optimism at each state-action pair in low-rank MDPs. Overall, learning adversarial low-rank MDPs with bandit feedback is indeed interesting and also particularly challenging, and we believe our work may serve as an important step toward this problem, which we leave as future study. [1] Cai et al. Provably Efficient Exploration in Policy Optimization. ICML, 20. [2] He et al. Near-optimal Policy Optimization Algorithms for Learning Adversarial Linear Mixture MDPs. AISTATS, 22. [3] Zhao et al. Learning adversarial linear mixture markov decision processes with bandit feedback and unknown transition. ICLR, 23. [4] Zhong et al. A Theoretical Analysis of Optimistic Proximal Policy Optimization in Linear Markov Decision Processes. arXiv, 23. [5] Sherman et al. Improved Regret for Efficient Online Reinforcement Learning with Linear Function Approximation. ICML, 23. [6] Jiang et al. Contextual decision processes with low bellman rank are pac-learnable. ICML, 17. [7] Sun et al. Model-based rl in contextual decision processes: Pac bounds and exponential improvements over model-free approaches. COLT, 19. [8] Du et al. Bilinear classes: A structural framework for provable generalization in rl. ICML, 21. [9] Luo et al. Policy optimization in adversarial mdps: Improved exploration via dilated bonuses. NeurIPS, 21. [10] Dai et al.
Refined Regret for Adversarial MDPs with Linear Function Approximation. ICML, 23. [11] Jin et al. Learning Adversarial Markov Decision Processes with Bandit Feedback and Unknown Transition. ICML, 20. --- Rebuttal Comment 1.1: Comment: I thank the authors for their response and have no further questions. --- Reply to Comment 1.1.1: Title: Author Response Comment: Thank you for your response to our rebuttal! If there are any additional questions or comments, we would also be more than happy to address them.
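The low-rank factorization described in Q3 can be made concrete with a toy construction: if each $\phi^\star(s,a)$ lies on the $d$-simplex and each latent factor of $\mu^\star$ is itself a distribution over next states, their product is a valid transition kernel of rank at most $d$. A hypothetical numpy sketch (sizes and construction are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
S, A, d = 6, 3, 2  # illustrative sizes: #states, #actions, rank

# phi^*(s,a): a point on the d-simplex for every state-action pair
Phi = rng.dirichlet(np.ones(d), size=S * A)   # shape (S*A, d)
# mu^*: each latent factor is itself a distribution over next states
Mu = rng.dirichlet(np.ones(S), size=d)        # shape (d, S)

# low-rank transition kernel: P[(s,a), s'] = <phi^*(s,a), mu^*(s')>
P = Phi @ Mu                                  # shape (S*A, S)

assert np.allclose(P.sum(axis=1), 1.0)        # every row is a distribution
assert np.linalg.matrix_rank(P) <= d          # rank bounded by d, not S
```

The point of the representation-learning problem is that only samples from `P` are observed: neither `Phi` nor `Mu` is given, unlike in linear MDPs.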
Summary: This work focuses on low-rank MDPs with adversarial losses in the full-information feedback setting. Different from previous work which assumes known features, this work considers the combination of representation learning and the regret minimization problem, and gives the first result on this specific topic. The low-rank MDPs are defined in Definition 3.1: they admit a low-rank structure under which the transition dynamics can be expressed in terms of two feature embedding functions. As described in the interaction protocol, the learner does not have access to the feature functions, and is required to learn these functions via interaction with the environment. While the learner has to collect enough samples to estimate the feature functions, her goal is to minimize the regret (defined in Line 201), that is, the difference between the cumulative loss suffered and that of the optimal policy in hindsight. The loss functions are selected adversarially and can change from episode to episode arbitrarily. To tackle these two problems simultaneously, the authors combine the techniques of representation learning and of RL in linear MDPs with adversarial losses into Algorithm 1. Lines 5-10 follow the idea of Uehara et al. [2022] to collect samples for representation learning, and Lines 11-17 follow the idea of learning adversarial linear MDPs as in the work of Cai et al. [2020], given the estimated feature functions solved in Eq. (1). The result is presented in Theorem 5.1, which guarantees that the regret of Algorithm 1 is bounded by $\widetilde{\mathcal{O}}( \frac{K^{3/4}}{1-\gamma} + \frac{\sqrt{K}}{(1-\gamma)^2})$, where the first term is attributed to the representation learning of unknown features, and the second term to the adversarial losses. Strengths: 1.
Prior to this work, it was unknown whether the regret minimization problem and the representation learning problem can be dealt with simultaneously under adversarial losses. This work is the first to achieve sub-linear regret, which sheds light on this direction. 2. Building on previous works on representation learning for MDPs (such as Uehara et al. [2022] and Agarwal et al. [2020]) and on linear MDPs (or low-rank MDPs) with adversarial losses (such as Cai et al. [2020] and Luo et al. [2021a]), the authors combine the techniques of these two fields hierarchically, under a novel scheme of `doubled exploration and exploitation', to address the regret minimization problem. Weaknesses: 1. The designed algorithm suffers from huge computation cost (Lines 13, 15, 16, and 17) when the state space is very large, which is the same as the OPPO algorithm in the previous work of Cai et al. [2020]. It is not very clear whether this could be improved with any planner oracle. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. The result $\mathcal{O}(K^{3/4})$ does not match the $\sqrt{K}$ regret lower bound for adversarial losses. Any specific conjecture on the lower bound/upper bound? 2. Any conjecture on the bandit feedback setting when restricting the MDPs to the linear mixture MDPs of Zhao et al. [2023]? 3. The loss function $\ell_k$ is defined as a function $ S \times A \rightarrow [0,1]$, which does not admit a linear structure. Any idea to improve the result given the fact that $\ell_k(s,a) = \theta_k^\top \phi^\star(s,a) $ when $\theta_k$ is revealed at the end of episode $k$? 4. The second term in the regret bound, compared with the regret bounds of OPPO and LSUOB-REPS, shaves a factor of $d$? Could you explain why this holds true, and whether this implies that your algorithm achieves better performance given the feature functions? Confidence: 3: You are fairly confident in your assessment.
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: This work is purely theoretical and does not have any potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable comments and suggestions. Our response to each question is provided in turn below. **Q1. "The designed algorithm suffers from huge computation cost ...."** Indeed, our algorithm has nearly the same computational efficiency as previous works studying policy-optimization (PO) methods for RL with linear function approximation and adversarial losses [1,2], except for the MLE procedure over the experienced transitions performed on Line 13 of our algorithm, which might be an additional computational bottleneck due to its nonconvex nature. However, as mentioned in Section 7, the same MLE procedure is also required in previous works studying low-rank MDPs [3,4,5], arising from the fact that the underlying feature mappings are no longer known a priori in low-rank MDPs. More importantly, though it is a nonconvex optimization problem, the MLE procedure can be approximately solved by gradient descent methods (*e.g.*, using neural networks) and is thus computationally feasible in practice. **Q2. "Any specific conjecture on the lower bound/upper bound?"** We conjecture that the part of our regret upper bound dealing with adversarial losses is tight. Besides, the dependence on the action space size $A$ of the part of our upper bound dealing with representation learning is tight, as shown by our upper bound together with our new regret lower bound (please see our response to reviewer n9xa), and we conjecture that the dependence on $\gamma$ of this part is tight, while the dependence on $K$ might be further improved. **Q3. "Any conjecture on the bandit feedback setting when restricting the MDPs to the linear mixture MDPs of Zhao et al.
[2023]?"** When learning in linear Mixture MDPS with given true feature mapping, we can construct ellipsoid confidence sets of transitions so as to construct optimistically biased loss estimates to tackle bandit feedback as in [7], but this is not aligned with our motivation to study adversarial RL where the true feature mappings are not known a priori, and thus beyond the scope of our paper. **Q4. "Any idea to improve the result given the fact that $\ell_k(s, a)=\theta_k^{\top} \phi^{\star}(s, a)$ when $\theta_k$ is revealed at the end of the episode $k$?"** Since we study adversarial low-rank MDPs with full-information feedback, imposing structural assumptions over loss functions or not actually does not affect the polynomial dependence on the number of entries of the loss functions. Indeed, due to our Lemma 5.1, the regret contributed by dealing with the adversarial loss functions does not have polynomial dependence on $S$ or $A$. In fact, as shown by, for example, Proposition 28.6 and 28.7 in [6], optimizing the online linear optimization problem with full-information feedback using OMD leads to no polynomial dependence on the number of entries of the loss functions in the unit-ball and probability simplex space, even if there are no structural assumptions imposed over the loss functions. On the other hand, in the bandit feedback case, additional structural assumptions over the loss functions are indeed needed to lift the dependence on the number of entries of the loss functions. **Q5. "The second term in the regret bound, comparing with the regret bound of OPPO and LSUOB-REPS, shaves a factor of $d$? Could you explain why this holds true, and whether this implies that your algorithm achieves better performance given the feature functions?"** Indeed, the second term in our upper bound does not depend on $d$, but this does not mean that our regret upper bound shaves a factor of $d$, compared with previous works [1,2,7]. 
Specifically, in adversarial environments with full-information feedback, the regret contributed by dealing with the adversarial losses (*i.e.*, the OMD Regret Term in Eq. (4) of our paper) does not have polynomial dependence on the dimension $d$ or on $S$ and $A$, as in previous works studying adversarial linear mixture MDPs with full-information feedback [1,2]. For instance, the analogous term in [2] is their term $I_1$, which can be bounded by $\widetilde{O}(H^2\sqrt{K})$ as shown in their Section 6 and thus corresponds to our second term $\widetilde{O}(\sqrt{K}/(1-\gamma)^2)$. The dependence on $d$ of the regret in [1,2] mainly comes from the difference between the values of the same policy in the learned and true models, which results from the model error and corresponds to the Estimation Bias Term in Eq. (4) of our paper. Such dependence on $d$ appears in both their bounds and the first term of our bound. The dependence on $d$ in [7] comes from the estimation error between the occupancy measures induced by the learned and true models respectively, which is thus also due to the model error. In a nutshell, the fact that there is no polynomial dependence on $d$ in the second term of our regret upper bound does not conflict with the results in previous works. For the same reason, it might not be appropriate to interpret this as meaning that our algorithm will achieve better performance given the feature functions. Thanks for pointing this out; we will include the above discussions in the revision of our paper for clarity. [1] Cai et al. Provably Efficient Exploration in Policy Optimization. ICML, 20. [2] He et al. Near-optimal Policy Optimization Algorithms for Learning Adversarial Linear Mixture MDPs. AISTATS, 22. [3] Agarwal et al. FLAMBE: structural complexity and representation learning of low rank mdp. NeurIPS, 20. [4] Uehara et al. Representation learning for online and offline RL in low-rank mdps. ICLR, 22. [5] Ni et al.
Representation learning for general-sum low-rank markov games. ICLR, 23. [6] Lattimore et al. Bandit algorithms. Cambridge University Press, 20. [7] Zhao et al. Learning adversarial linear mixture markov decision processes with bandit feedback and unknown transition. ICLR, 23. --- Rebuttal Comment 1.1: Title: Thank the authors for their response Comment: Thank the authors for their response. I would like to keep my score. Good work! --- Reply to Comment 1.1.1: Comment: Thank you for your positive feedback for our work! We are also happy to discuss any further questions.
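The full-information OMD guarantee invoked in Q4 (Propositions 28.6 and 28.7 of [6]) is the classical exponential-weights bound over the probability simplex, whose regret depends only logarithmically on the number of entries of the loss vectors. A generic sketch of that guarantee, not the POLO update itself:

```python
import numpy as np

def hedge_regret(losses, eta):
    """OMD with entropy regularizer (exponential weights) over the simplex:
    p_{k+1}(a) is proportional to p_k(a) * exp(-eta * loss_k(a))."""
    K, A = losses.shape
    p = np.full(A, 1.0 / A)
    total = 0.0
    for k in range(K):
        total += p @ losses[k]            # expected loss of the current play
        p = p * np.exp(-eta * losses[k])  # multiplicative-weights update
        p /= p.sum()
    return total - losses.sum(axis=0).min()  # regret vs. best fixed action

rng = np.random.default_rng(0)
K, A = 2000, 5
losses = rng.uniform(size=(K, A))  # adversarial in general; random here
eta = np.sqrt(8 * np.log(A) / K)   # standard tuning
# the classical bound sqrt(K ln(A) / 2) holds for ANY loss sequence in [0,1]
assert hedge_regret(losses, eta) <= np.sqrt(K * np.log(A) / 2)
```

The bound scales with $\sqrt{\ln A}$ rather than $A$, which is the sense in which the OMD Regret Term avoids polynomial dependence on the number of entries.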
Summary: The authors consider the problem of learning an adversarial low-rank infinite episodic MDP with unknown transition and full-information feedback. The idea behind this problem is that in many RL applications, the state and action spaces might be prohibitively large, rendering results that scale with these quantities meaningless. Several approaches have used the assumption that there is a feature mapping taking state-action pairs to a low-dimensional embedding space. While the common practice is to assume that this embedding is known, in the present problem the learner has to learn how the data is represented. The second challenge faced by the learner is that the loss functions change adversarially. These two challenges have been studied separately, but this work is the first to study them simultaneously. To do so, the authors work under the assumption that there exists an unknown low-rank embedding of the state-action pairs and propose an algorithm to tackle this problem: Policy Optimization for LOw-rank MDPs (POLO). This algorithm relies on the standard Online Mirror Descent technique, combined with a bonus function which lowers the observed losses to favor exploration. The authors then provide a rigorous analysis of the regret and show that their algorithm attains sublinear regret. The regret analysis follows a standard decomposition of the regret into an estimation bias term, the OMD regret term, and an optimism term, which respectively capture the gap between the true value function and the estimated value function of the policy played by the learner, the difference between the performance of the policy played by the learner and the best policy in hindsight on the estimated value function, and finally the difference between the estimated value function and the true value function for the best policy.
Strengths: This paper tackles a new and challenging problem by combining two notably difficult aspects of online learning with MDPs: handling adversarial losses and having to learn the state-action feature mapping. Each of these problems has received a lot of attention in the past few years and both are really relevant for the online learning community. The authors achieve their goal by presenting an algorithm that adapts to both of these aspects simultaneously and obtains sublinear regret. The algorithm that they present builds upon standard tools in the literature and has an appreciably clear and compact analysis. The authors properly discuss the limitations of their work, notably by questioning the computational efficiency of their approach and ensuring that their algorithm is computationally feasible. Overall, this paper will provide a good baseline for the problem of representation learning of MDPs in adversarial environments. Weaknesses: It would be interesting to provide a lower bound for the problem: already from simple online learning with full-information feedback, incurring at least $\sqrt{K}$ regret against adversarial losses is unavoidable. In this work, it is stated that the first term in the regret bound, which scales with $K^{3/4}/(1 - \gamma)$, comes from learning the unknown transition kernel $P^\star$. Do you have a lower bound for the representation learning part of the problem? Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Is there a lower bound for this problem, and could you elaborate on the cost of learning the representation? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The authors provided an interesting discussion of the potential limitations of their work and addressed them adequately. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable comments and suggestions. Our response to each question is provided in turn below. **Q1. "Is there a lower bound for this problem, and could you elaborate on the cost of learning the representation?"** We now provide a lower bound for the representation learning part of the problem, in the setting of learning low-rank MDPs with deterministic reward functions, which thus also serves as a regret lower bound for learning adversarial low-rank MDPs with full-information feedback. To the best of our knowledge, this is the first regret lower bound for low-rank MDPs. At a high level, we construct $dA$ hard MDP instances, which are difficult to distinguish in KL divergence but have very different optimal policies. Similar instances were first introduced to prove the regret lower bounds for tabular MDPs [1,2] and were recently used to prove the lower bound on the sample complexity of learning low-rank MDPs by [3]. **Theorem 1** (Regret Lower Bound) Suppose $d\geq 8$, $S\geq d+1$, $A\geq d-3$, and $K\geq 2(d-4)A$. Then for any algorithm $\operatorname{Alg}$, there exists an episodic infinite-horizon low-rank MDP $\mathcal{M}_{\operatorname{Alg}}$ with fixed reward function such that the expected regret for this MDP is lower bounded by $\Omega\left(\frac{\gamma^2}{1-\gamma} \sqrt{d A K}\right)$. **Remark 1** We note that this regret lower bound can hold when $d\ll S$ and $d\ll A$, which means that this lower bound is non-trivial. Besides, our Theorem 5.1 matches the regret lower bound in $A$ up to a logarithmic factor but loses a factor of $\widetilde{O}(K^{1/4}d^{1/2}/\gamma^2)$.
More importantly, compared with the regret upper bound $\widetilde{O}\left(d \sqrt{ K/(1-\gamma)^3}\right)$ of linear MDPs [5] (the finite horizon $H$ is substituted by the effective horizon $\Theta(1/(1-\gamma))$), the dependence on $A$ in the regret lower bound of low-rank MDP shows a clear separation between low-rank MDPs and linear MDPs, which demonstrates that low-rank MDPs are statistically more difficult to learn than linear MDPs in the regret minimization setting. **Proof of Theorem 1** To begin with, we first construct a reference low-rank MDP $\mathcal{M}_0$, with its elements detailed as follows: * State space: $\mathcal{S}=\\{s_{1,1},s_{2,1},s_{2,2},\ldots,s_{2,d-4}\\}\cup\\{s_g,s_b\\}\cup\\{s_o\\}\cup\mathcal{S}_{\mathcal{O}}$, where $\mathcal{S}_{\mathcal{O}}= \{s_{o,i}\}_{i=1}^{S-d}$ denotes the set of 'outlier states', $s_g$ denotes the 'good state', and $s_b$ denotes the 'bad state'. * Action space: $\mathcal{A}=\\{a_1,a_2,\ldots,a_{A}\\}$. * Reward function: $r(s,a)=\mathbb{I}\\{s=s_g\\}+\frac{1}{2}\mathbb{I}\\{s\in\mathcal{S}_{\mathcal{O}}\\}$. * Transitions: * For the initial state $s_{1,1}$, the learner will deterministically transit to state $s_{2,i}$ if taking action $a_i$, $\forall i\in[d-4]$, and will transit to state $s_o$ otherwise. Formally, $P^\star\left(s_{2,i} \mid s_{1,1}, a_i\right)=1$, $\forall i\in[d-4]$, and $P^\star\left(s_o \mid s_{1,1}, a_i\right)=1$, $\forall i\in[A]\setminus[d-4]$. * For state $s_{2,i}\in\\{s_{2,1},s_{2,2},\ldots,s_{2,d-4}\\}$, the learner will transit to good state $s_{g}$ and bad state $s_b$ uniformly at random, no matter what action it takes, $\textit{i.e.}$, $P^\star\left(s_{g} \mid s_{2,i}, a\right)=P^\star\left(s_{b} \mid s_{2,i}, a\right)=\frac{1}{2}$, $\forall i\in[d-4]$ and $a\in\mathcal{A}$. * For states $s_o$ and $s_{o,i}\in\mathcal{S}_{\mathcal{O}}$, the learner will uniformly transit to a state $s_{o,j}\in\mathcal{S}_{\mathcal{O}}$, no matter what action it takes. 
Formally, $P^\star\left(s_{o,j} \mid s_o, a\right)=P^\star\left(s_{o,j} \mid s_{o,i}, a\right)=\frac{1}{S-d}$, $\forall s_{o,i},s_{o,j}\in\mathcal{S}_{\mathcal{O}}$ and $a\in\mathcal{A}$. * For states $s_g$ and $s_b$, the learner will stay at the current state no matter what action it takes, which means that $P^\star\left(s_{g} \mid s_g, a\right)=P^\star\left(s_{b} \mid s_{b}, a\right)=1$. Further, the transitions of the above MDP can be realized by $P^\star(s^\prime\mid s,a)=\langle\phi^\star(s,a),\mu^\star(s^\prime)\rangle$, with the following features, which thus implies that this MDP is indeed a low-rank MDP: * $\mu^\star(s_{2,i})=\mathbf{e}_i$, $\forall i\in[d-4]$; $\mu^\star(s_{1,1})=\mathbf{0}$; $\mu^\star(s_o)=(0,\ldots,0,1,0)$; $\mu^\star(s_g)=(0,\ldots,0,1,0,0,0)$; $\mu^\star(s_b)=(0,\ldots,0,0,1,0,0)$; $\mu^\star(s_{o,j})=(0,\ldots,0,1)$. * $\phi^\star(s_{1,1},a_i)=\mathbf{e}_i$, $\forall i\in[d-4]$; $\phi^\star(s_{1,1},a_i)=(0,\ldots,0,1,0)$, $\forall i\in[A]\setminus[d-4]$; $\phi^\star\left(s_{2,j}, a\right)=(0,\ldots,0,\frac{1}{2},\frac{1}{2},0,0)$, $\forall a\in\mathcal{A}$; $\phi^\star(s_o,a)=\phi^\star(s_{o,j},a)=(0,\ldots,0,\frac{1}{S-d})$, $\forall a\in\mathcal{A}$; $\phi^\star(s_g,a)=\mu^\star(s_g)$ and $\phi^\star(s_b,a)=\mu^\star(s_b)$, $\forall a\in\mathcal{A}$. Based on the reference MDP $\mathcal{M}_0$, we define other low-rank MDP instances $\mathcal{M}_{(i^\ast,a^\ast)}$, $\forall (i^\ast,a^\ast)\in[d-4]\times \mathcal{A}$. Specifically, the only difference between $\mathcal{M}_{(i^\ast,a^\ast)}$ and $\mathcal{M}_0$ is that $\phi^\star(s_{2,i^\ast},a^\ast)=(0, \ldots, 0, \frac{1}{2}+\varepsilon, \frac{1}{2}-\varepsilon, 0,0)$, such that $P^\star\left(s_g \mid s_{2,i^\ast}, a^\ast\right)=\frac{1}{2}+\varepsilon$ and $P^\star\left(s_b \mid s_{2,i^\ast}, a^\ast\right)=\frac{1}{2}-\varepsilon$, for some $\varepsilon>0$ to be defined later.
In what follows, we denote by $\mathbb{P}_{\left(i^*, a^*\right)} \triangleq \mathbb{P}_{\operatorname{Alg},\mathcal{M}_{\left(i^*, a^*\right)}}$ the probability measure over the outcomes induced by the interaction between $\operatorname{Alg}$ and $\mathcal{M}_{\left(i^*, a^*\right)}$ (please see the remaining proof in our 'global' response above).

---

Rebuttal Comment 1.1:
Title: About the equations that were not successfully rendered
Comment: We now rewrite the sentences containing the equations in the proof of Theorem 1 that were not successfully rendered:

* where $\mathcal{S}_{\mathcal{O}}=\{s_{o,i}\}_{i=1}^{S-d}$ denotes the set of 'outlier states',
* we denote by $\mathbb{P}_{\left(i^*, a^*\right)} \triangleq \mathbb{P}_{\operatorname{Alg}, \mathcal{M}_{\left(i^*, a^*\right)}}$ the probability measure over the outcomes induced by the interaction between $\operatorname{Alg}$ and $\mathcal{M}_{\left(i^*, a^*\right)}$.
Summary: This work studies low-rank MDPs with unknown and fixed transitions and full-information adversarial losses. The proposed algorithm generalizes RepUCB from the fixed reward setting to the adversarial reward setting. The main idea of the algorithm is to replace the greedy policy in RepUCB with an incremental policy update with exponential weights. Also, the proposed algorithm interleaves exploration and exploitation to obtain low regret. Strengths: - This is an elegant generalization of RepUCB to the adversarial setting. It is also the first paper that simultaneously deals with non-linear function approximation and adversarial losses. Weaknesses: - The technical novelty is a bit limited -- Given OPPO and POWERS, it becomes more or less clear that the key technique for the unknown transition + full-information loss feedback setting is to ensure optimism and use a no-regret algorithm over the optimistically-biased losses. Though the form of optimism in low-rank MDPs (i.e., optimism only in the initial state) is slightly different from that in linear mixture MDPs, with full-information loss feedback, the key challenge is already resolved in RepUCB. It is not surprising that most of the analysis follows that of RepUCB except for the OMD regret term. - The proposed algorithm performs an epsilon-greedy styled exploration, which leads to a sub-optimal $K^{3/4}$ regret bound. To improve the technical novelty, I think the paper should try to design some less trivial exploration scheme and improve over the $K^{3/4}$ bound. Another direction to improve the technical novelty is to study the bandit feedback case. - The comparison with Rep-UCB around Line 259-268 and the comment that your algorithm is tighter in $d, A, \gamma$ is slightly off. The reason is that your bound is better than RepUCB only when $K^{3/4}A^{1/2}d/(1-\gamma) < d^{4/3}A^{2/3}K^{2/3}/(1-\gamma)^{4/3}$, which is equivalent to $K<d^4A^2/(1-\gamma)^4$. 
In this regime, both regret bounds $K^{3/4}A^{1/2}d/(1-\gamma)$ and $d^{4/3}A^{2/3}K^{2/3}/(1-\gamma)^{4/3}$ are at least $K$, meaning that this is a *vacuous* regime where both regret bounds are meaningless. In other words, there is no case where your regret bound is better than that of RepUCB. If we calculate the sample complexity, the two algorithms would give $\frac{A^2d^4}{(1-\gamma)^4\epsilon^4}$ and $\frac{A^2d^4}{(1-\gamma)^4\epsilon^3}$ respectively, so there is actually no improvement in terms of $d, A, (1-\gamma)$. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See the above sections. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: There is no societal limitation. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
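The reviewer's conversion between regret bounds and sample complexities, and the crossover claim, are straightforward to check numerically; a small Python sketch of ours with arbitrary illustrative constants (logarithmic factors hidden by $\widetilde{O}(\cdot)$ are ignored):

```python
import math

d, A, gamma, eps = 8.0, 16.0, 0.9, 0.05        # illustrative values only
c1 = math.sqrt(A) * d / (1 - gamma)            # this paper:  regret1(K) = c1 * K^(3/4)
c2 = d**(4/3) * A**(2/3) / (1 - gamma)**(4/3)  # ETC Rep-UCB: regret2(K) = c2 * K^(2/3)

# sample complexity = smallest K with regret(K) / K <= eps
K1 = (c1 / eps) ** 4
K2 = (c2 / eps) ** 3
assert math.isclose(K1, A**2 * d**4 / ((1 - gamma)**4 * eps**4))
assert math.isclose(K2, A**2 * d**4 / ((1 - gamma)**4 * eps**3))

# crossover: regret1(K) < regret2(K) exactly when K < Kc := d^4 A^2 / (1-gamma)^4,
# but in that whole regime regret1(K) >= K, i.e. the bound is vacuous
Kc = d**4 * A**2 / (1 - gamma)**4
for K in (Kc / 10, Kc / 2, Kc * 0.99):
    assert c1 * K**0.75 < c2 * K**(2/3)        # the regime where bound 1 "wins"...
    assert c1 * K**0.75 >= K                   # ...is exactly where it exceeds K
```

This confirms both halves of the reviewer's arithmetic: the two sample complexities match the stated expressions, and the regime where the new bound dominates is vacuous.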
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable comments and suggestions. Our response to each question is provided in turn below. **Q1. "The technical novelty is a bit limited."** Just as previous works studying adversarial linear mixture MDPs using policy-optimization (PO) based methods [1,2] rely upon some ingredients of the analyses of stochastic linear mixture MDPs [3,4], our work also relies upon some ingredients of the analyses of stochastic low-rank MDPs [5]. Besides, we would also like to note that our work is not a straightforward combination of Rep-UCB and previous PO-based methods, and new ingredients are needed to achieve our regret guarantee, due to a technical issue which is unique to low-rank MDPs. Specifically, as discussed in Sections 4.1 and 5.1, the original Rep-UCB algorithm only has a sample complexity guarantee but does not enjoy a regret guarantee in either stochastic or adversarial environments, due to its uniform exploration in each episode. Moreover, a common ETC-style adaptation of Rep-UCB also only attains an $\widetilde{O}(K^{2/3})$ regret in stochastic environments, but committing to a fixed loss function makes it still fail to extend to adversarial environments. Therefore, directly applying previous PO-based methods [1,2] together with the current Rep-UCB algorithm does not carry over to our problem. Further, to tackle this issue, our algorithmic design features a new *doubled exploration and exploitation* scheme, which leverages a mixed roll-out policy to simultaneously conduct (a) the uniform exploration over transitions required by representation learning; and (b) the exploration and exploitation over adversarial loss functions. As discussed in Sections 4.1 and 5.1, this is critical to achieving our final regret guarantee. 
Moreover, we now provide a regret lower bound for learning low-rank MDPs, which is the first regret lower bound for low-rank MDPs and together with our regret upper bound provides a more complete characterization of regret minimization for low-rank MDPs. Please see this in our response to reviewer n9xa. **Q2. "The proposed algorithm performs an epsilon-greedy styled exploration...To improve the technical novelty..."** Indeed, our doubled exploration and exploitation scheme is similar in spirit to the epsilon-greedy-styled exploration. However, we would also like to note that actually, they are inherently not the same, since the epsilon-greedy styled exploration will only conduct exploitation in certain rounds but our doubled exploration and exploitation scheme will always conduct exploration in all rounds, which will be either the exploration required by representation learning or the exploration (and exploitation) required to tackle the adversarial loss functions. As we discussed in Section 6, we do believe further optimizing the current dependence on $K$ and studying the bandit feedback case are interesting and important next steps. However, we would like to remark that our $\widetilde{O}(K^{3/4})$ regret guarantee obtained by the PO-based method is valuable and meaningful, especially considering that it is the first result for RL with both nonlinear function approximation and adversarial loss functions. In fact, the SOTA regret guarantee obtained by the PO-based method for the more amenable adversarial linear MDPs with given underlying feature mappings in the full-information setting is $\widetilde{O}(K^{3/4})$, as shown by a concurrent work [6]. Our algorithm obtains the regret guarantee with the same dependence on $K$ and can additionally work when no true feature mappings are known a priori. 
For the bandit feedback case, due to the local-search nature of PO-based methods, the current PO-based methods for adversarial linear MDPs [7,8,9] depend on the *dilated* exploration bonuses (and its variants), which critically rely on the given underlying true feature mappings, to facilitate global exploration. These PO-based methods are thus not applicable to our problem, with no true feature mappings given a priori. Moreover, we would like to note that, even with the true feature mappings given, learning adversarial linear MDPs with bandit feedback is already rather challenging, where the SOTA regret achieved by the PO-based method is only of $\widetilde{O}(K^{6/7})$ order [9] when no exploration conditions are given. Overall, learning adversarial low-rank MDPs with bandit feedback is indeed interesting and also particularly challenging, and we believe our work may serve as an important step to further tackle this problem, which we leave as our future study. **Q3. "The comparison ... is slightly off."** Thank you again for pointing this out. Though we mainly focus on learning low-rank MDPs in adversarial environments, the comparison between the regret guarantee of our algorithm and that of the ETC-style adaption of Rep-UCB in stochastic environments is indeed not accurate, and we will improve our statement on Line 264-266 accordingly in the revised version of our paper. [1] Cai et al. Provably Efficient Exploration in Policy Optimization. ICML, 20. [2] He et al. Near-optimal Policy Optimization Algorithms for Learning Adversarial Linear Mixture MDPs. AISTATS, 22. [3] Ayoub et al. Model-Based Reinforcement Learning with Value-Targeted Regression. ICML, 20. [4] Zhou et al. Nearly Minimax Optimal Reinforcement Learning for Linear Mixture Markov Decision Processes. COLT, 21. [5] Uehara et al. Representation learning for online and offline RL in low-rank mdps. ICLR, 22. [6] Zhong et al. 
A Theoretical Analysis of Optimistic Proximal Policy Optimization in Linear Markov Decision Processes. arXiv, 23. [7] Luo et al. Policy optimization in adversarial mdps: Improved exploration via dilated bonuses. NeurIPS, 21. [8] Dai et al. Refined Regret for Adversarial MDPs with Linear Function Approximation. ICML, 23. [9] Sherman et al. Improved Regret for Efficient Online Reinforcement Learning with Linear Function Approximation. ICML, 23. --- Rebuttal Comment 1.1: Comment: I agree that since the setting is not studied before, any result would be beneficial for future study, even though it's not close to optimal. I have raised my score. --- Reply to Comment 1.1.1: Comment: Thank you for your positive feedback and support for our work! We are also more than happy to answer any further questions.
Rebuttal 1: Rebuttal: **Proof of the Regret Lower Bound (Cont.)**

and by $\mathbb{E}_{\left(i^*, a^*\right)} \triangleq \mathbb{E}_{\operatorname{Alg}, \mathcal{M}_{\left(i^*, a^*\right)}}$ the expectation with respect to $\mathbb{P}_{\left(i^*, a^*\right)}$.

**Step 1: Regret of $\operatorname{Alg}$ over $\mathcal{M}_{(i^\ast,a^\ast)}$**

For any $\mathcal{M}_{(i^\ast,a^\ast)}$, its optimal policy $\pi^\ast_{(i^\ast,a^\ast)}:\mathcal{S}\to\mathcal{A}$ satisfies $\pi^\ast_{(i^\ast,a^\ast)}(s_{1,1})=a_{i^\ast}$ and $\pi^\ast_{(i^\ast,a^\ast)}(s_{2,i^\ast})=a^\ast$, with the optimal value function
$$
\begin{align}
V_{0}^*(s_{1,1}) & =\mathbb{E}\left[\sum_{\tau=0}^{+\infty} \gamma^\tau r(s_{\tau}, a_\tau) \mid \pi^\ast_{(i^\ast,a^\ast)},P^\star_{(i^\ast,a^\ast)}, s_0=s_{1,1}\right]=\sum_{\tau=2}^{+\infty} \gamma^\tau \left(\frac{1}{2}+\varepsilon\right)=\frac{\gamma^2}{1-\gamma}\left(\frac{1}{2}+\varepsilon\right).\tag{1}\label{eq:lb_eq1}
\end{align}
$$
For any policy $\pi$, it is also clear that its value function satisfies
$$
V_{0}^\pi(s_{1,1})=\frac{\gamma^2}{1-\gamma}\left(\frac{1}{2}+\varepsilon\,\mathbb{P}_{(i^*, a^*)}\left((s_2,a_2)=(s_{2,i^*},a^*)\right)\right).\tag{2}\label{eq:lb_eq2}
$$
Combining Eq. $\eqref{eq:lb_eq1}$ and Eq. $\eqref{eq:lb_eq2}$ shows that the regret of $\operatorname{Alg}$ in $\mathcal{M}_{(i^\ast,a^\ast)}$ satisfies
$$
R_K(\operatorname{Alg}, \mathcal{M}_{(i^\ast,a^\ast)})=\frac{\gamma^2\varepsilon}{1-\gamma}K\left(1-\frac{1}{K}\mathbb{E}_{(i^*,a^*)}\left[N^K_{(i^*,a^*)} \right]\right)\,,
$$
where we define $N^K_{(i^\ast,a^\ast)}= \sum_{k=1}^K\mathbb{I}\{(s^k_2,a^k_2)=(s_{2,i^\ast},a^\ast)\}$. 
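Eq. (1) is a geometric series starting at $\tau=2$, and Eq. (2) differs from it only through the probability of reaching $(s_{2,i^*},a^*)$; a quick numeric check (our own sketch, with illustrative $\gamma$ and $\varepsilon$):

```python
import math

gamma, eps = 0.9, 0.1                            # illustrative values

# Eq. (1): rewards are 0 at tau = 0, 1; from tau = 2 onward the learner sits in
# the absorbing good state s_g (reward 1) with prob 1/2 + eps, else in s_b (reward 0)
v_closed = gamma**2 / (1 - gamma) * (0.5 + eps)
v_series = sum(gamma**t * (0.5 + eps) for t in range(2, 2000))   # truncated series
assert math.isclose(v_series, v_closed, rel_tol=1e-9)

# Eq. (2): an arbitrary policy replaces eps by eps * P(hit), so the per-episode
# gap is linear in the probability of missing (s_{2,i*}, a*)
def gap(p_hit):
    return v_closed - gamma**2 / (1 - gamma) * (0.5 + eps * p_hit)

assert math.isclose(gap(0.3), gamma**2 * eps / (1 - gamma) * (1 - 0.3))
```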
**Step 2: Maximum regret of $\operatorname{Alg}$ over all possible $\mathcal{M}_{(i^\ast,a^\ast)}$**

With $R_K(\operatorname{Alg}, \mathcal{M}_{(i^\ast,a^\ast)})$ in the above equation, we can deduce that
$$
\begin{align}
\max_{(i^\ast,a^\ast)}R_K(\operatorname{Alg},\mathcal{M}_{(i^\ast,a^\ast)})&\geq \frac{1}{(d-4)A}\sum_{(i^\ast,a^\ast)}R_K(\operatorname{Alg},\mathcal{M}_{(i^\ast,a^\ast)})\notag\\ &\geq \frac{\gamma^2\varepsilon}{1-\gamma}K\left(1-\frac{1}{K(d-4)A}\sum_{(i^\ast,a^\ast)}\mathbb{E}_{(i^\ast,a^\ast)}\left[N^K_{(i^\ast,a^\ast)} \right]\right)\,.\tag{3}\label{eq:lb_eq3}
\end{align}
$$
To lower bound the above display, it remains to upper bound $\sum_{(i^*,a^*)}\mathbb{E}_{(i^*,a^*)}[N^K_{(i^*,a^*)}]$. To this end, by Lemma 1 in the work of [4] together with the fact that $N^K_{(i^\ast,a^\ast)}/K\in[0,1]$, it holds that
$$
\begin{align*}
\operatorname{KL}\left(\operatorname{Ber}\left(\frac{1}{K}\mathbb{E}_0\left[N^K_{(i^\ast,a^\ast)}\right]\right),\operatorname{Ber}\left(\frac{1}{K}\mathbb{E}_{(i^\ast,a^\ast)}\left[N^K_{(i^\ast,a^\ast)}\right]\right)\right)\leq \operatorname{KL}\left(\mathbb{P}_0,\mathbb{P}_{(i^\ast,a^\ast)}\right)\,.
\end{align*}
$$
This implies that
$$
\begin{align*}
\frac{1}{K}\mathbb{E}_{(i^\ast,a^\ast)}\left[N^K_{(i^\ast,a^\ast)}\right] &\leq \frac{1}{K}\mathbb{E}_0 \left[N^K_{(i^\ast,a^\ast)}\right]+\sqrt{\frac{1}{2}\operatorname{KL}\left(\mathbb{P}_0,\mathbb{P}_{(i^\ast,a^\ast)}\right)}\\ &=\frac{1}{K}\mathbb{E}_0 \left[N^K_{(i^\ast,a^\ast)}\right]+\varepsilon\sqrt{2}\sqrt{\mathbb{E}_0\left[N^K_{(i^\ast,a^\ast)}\right]}\,,
\end{align*}
$$
where the inequality is due to Pinsker's inequality that $(p-q)^2 \leq \frac{1}{2} \operatorname{KL}(\operatorname{Ber}(p), \operatorname{Ber}(q))$, for $p,q\in[0,1]$, and the equality comes from Lemma 15.1 of [1] and Lemma 14 of [2] as well as assuming $0\leq\varepsilon\leq\frac{1}{4}$. 
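Both estimates used in this step are easy to spot-check numerically: Pinsker's inequality for Bernoulli distributions, and a bound of the form $\operatorname{KL}(\operatorname{Ber}(\tfrac{1}{2}),\operatorname{Ber}(\tfrac{1}{2}+\varepsilon))\leq 4\varepsilon^2$, which is the kind of estimate that requires $\varepsilon\leq\tfrac{1}{4}$ (a sketch of ours, not part of the rebuttal):

```python
import math
import random

def kl_ber(p, q):
    """KL(Ber(p) || Ber(q))."""
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

# Pinsker for Bernoullis: (p - q)^2 <= KL(Ber(p), Ber(q)) / 2
random.seed(0)
for _ in range(10_000):
    p, q = random.uniform(0.01, 0.99), random.uniform(0.01, 0.99)
    assert (p - q) ** 2 <= 0.5 * kl_ber(p, q) + 1e-12

# the eps <= 1/4 regime: KL(Ber(1/2), Ber(1/2 + e)) = -ln(1 - 4 e^2) / 2 <= 4 e^2
for e in (0.01, 0.1, 0.25):
    assert kl_ber(0.5, 0.5 + e) <= 4 * e**2
```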
Based on this, one can see that
$$
\begin{align}
\frac{1}{K}\sum_{(i^\ast,a^\ast)}\mathbb{E}_{(i^\ast,a^\ast)}\left[N^K_{(i^\ast,a^\ast)}\right] &\leq \frac{1}{K}\sum_{(i^\ast,a^\ast)}\mathbb{E}_0 \left[N^K_{(i^\ast,a^\ast)}\right]+\varepsilon\sqrt{2}\sum_{(i^\ast,a^\ast)}\sqrt{\mathbb{E}_0\left[N^K_{(i^\ast,a^\ast)}\right]}\notag\\ &\leq 1+\varepsilon\sqrt{2}\sqrt{(d-4)AK}\,,\tag{4}\label{eq:lb_eq4}
\end{align}
$$
where the second inequality follows from the Cauchy-Schwarz inequality together with the fact that $N^K_{(i^\ast,a^\ast)}\leq K$.

**Step 3: Optimizing $\varepsilon$ to lower bound the maximum regret**

Substituting Eq. $\eqref{eq:lb_eq4}$ into Eq. $\eqref{eq:lb_eq3}$ leads to
$$
\begin{align*}
\max_{(i^\ast,a^\ast)}R_K(\operatorname{Alg},\mathcal{M}_{(i^\ast,a^\ast)}) &\geq\frac{\gamma^2\varepsilon}{1-\gamma}K\left(1-\frac{1}{(d-4)A}-\varepsilon\sqrt{2}\sqrt{\frac{K}{(d-4)A}}\right)\\ &\geq\frac{1}{4\sqrt{2}}\cdot\frac{\gamma^2}{1-\gamma}\left(1-\frac{1}{(d-4)A}\right)^2\sqrt{(d-4)AK}\\ &\geq \frac{361}{1600\sqrt{2}}\cdot\frac{\gamma^2}{1-\gamma}\sqrt{(d-4)AK}\,,
\end{align*}
$$
where the second inequality comes from choosing $\varepsilon=\frac{1}{2\sqrt{2}}\left(1-\frac{1}{(d-4)A}\right)\sqrt{\frac{(d-4)A}{K}}$ and the last inequality is due to $d\geq 8$ and $A\geq d-3$. Finally, note that $\varepsilon\leq \frac{1}{4}$ is guaranteed when $K\geq 2(d-4)A$. The proof is thus concluded. **Q.E.D.**

[1] Lattimore et al. Bandit algorithms. Cambridge University Press, 20. [2] Domingues et al. Episodic Reinforcement Learning in Finite MDPs: Minimax Lower Bounds Revisited. ALT, 21. [3] Cheng et al. Improved Sample Complexity for Reward-free Reinforcement Learning under Low-rank MDPs. ICLR, 23. [4] Garivier et al. Explore first, exploit next: The true shape of regret in bandit problems. Mathematics of Operations Research, 19. [5] He et al. Nearly Minimax Optimal Reinforcement Learning for Linear Markov Decision Processes. ICML, 23.
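The algebra of Step 3 (the choice of $\varepsilon$, the requirement $\varepsilon\leq\tfrac{1}{4}$, and the final constant) can also be verified numerically; a small sketch of ours with illustrative values $d=8$, $A=6$:

```python
import math

d, A, gamma, K = 8, 6, 0.9, 10**6
m = (d - 4) * A                                    # shorthand for (d - 4) * A

eps = (1 - 1 / m) * math.sqrt(m / K) / (2 * math.sqrt(2))
assert K >= 2 * m and eps <= 0.25                  # range of eps assumed in Step 2

# plugging this eps into the first lower bound of Step 3 recovers the second line
lhs = gamma**2 * eps / (1 - gamma) * K * (1 - 1 / m - eps * math.sqrt(2 * K / m))
rhs = gamma**2 / (1 - gamma) * (1 - 1 / m)**2 * math.sqrt(m * K) / (4 * math.sqrt(2))
assert math.isclose(lhs, rhs)

# final constant: (1 - 1/((d-4)A))^2 >= (19/20)^2 = 361/400, since (d-4)A >= 20
assert (1 - 1 / m)**2 > 361 / 400
```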
NeurIPS_2023_submissions_huggingface
2023
PyNeRF: Pyramidal Neural Radiance Fields
Accept (poster)
Summary: This work tackles the problem of antialiasing in grid based Neural Radiance Field representations (e.g. INGP, DirectVoxGo). To this end, a very simple solution is proposed: *instead of training a single NeRF with multiscale features, train separate NeRFs at different resolutions and decide which to use based on the distance of the sample from the camera (size of the pixel footprint)*. Specifically, two levels (closest smaller and larger) are used to predict the color and density, which are then linearly combined into a single prediction per sample along the ray. This requires one additional MLP evaluation per sample, but is still much more efficient than the multisamples proposed e.g. in ZipNeRF. The proposed method is evaluated on the Multiscale Blender and Multiscale Mip-NeRF 360 datasets, where it consistently outperforms the state of the art. Strengths: In my opinion, the main strength of this paper is its simplicity. The proposed idea is very general and could easily be implemented in grid based representations at only a minor cost (one additional MLP evaluation per sample). The increased storage is also not a large problem, as shown in the supplementary, as features can be shared across levels, resulting only in the overhead of MLP weights (very small compared to the feature volumes). I really like such simple ideas that lead to good performance improvements. The paper is also well presented and easy to understand, and the ablation studies support the main ideas (pyramid training, simplification of the Laplacian pyramid). I especially liked the analysis in the supplementary material (it is maybe unfortunate to have that only in the supplement, and it could be moved to the main paper for the camera-ready version). Weaknesses: I only have one minor weakness - sometimes some details are missing, or the description is slightly ambiguous. For example, the number of levels, the size of the features per level, and the MLP dimensions are missing. 
In the supplementary, it is not fully clear to me what the shared INGP table denotes. In the original INGP each level has its own hash table; does this imply that a single one is used for all levels (what are the dimension and number of the features in this hash table?) Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: **Comments - potential improvement**: - The proposed method is very general, but it is only evaluated on the INGP representation. While TensoRF is used in the ablation in the supplementary, it is not clear if the proposed multilevel training also improves its performance. It would be great to support the claims of generality by applying the ideas to at least one more representation. **Questions**: - The gradients for each camera sample are only propagated back to 2 levels. If the camera poses are very different, this could mean that in some areas the finest levels are never supervised - what happens if the novel view is closer than the camera of the training view? It would mean that the levels that are sampled were never supervised, or? - In the representation in the main paper, I don't fully understand why the storage requirements are larger. Is it because there is no concatenation of the features, and hence the feature vectors of each level are of higher dimension? It would be good to clarify this. - If I understand correctly, the method proposed in the supplementary with the shared features across the levels is even simpler (only adds L-1 MLPs to the formulation) and achieves comparable performance. Is there a good reason to not make that the main method? I guess it is not trivial to share the features in the methods that store them explicitly (TensoRF or DirectVoxGo). Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The authors describe the limitations of their method and potential negative societal impacts adequately. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for reviewing our paper and for the helpful comments! **Generalization.** With regards to improvements being marginal for other backbones such as TensoRF, we admittedly focused on hashtable approaches for our original submission. As the authors of TensoRF state, it is mainly designed for bounded scenes [1] and our initial evaluation of TensoRF on unbounded 360 views yielded poor results. In response to reviewer concerns, we have added optimizations not present in the original TensoRF method (such as Mip-NeRF 360's scene contraction [2]), improving the baseline TensoRF from 14.75 to 17.21 dB PSNR, and now see a large improvement when combined with PyNeRF (21.35 dB). We also ran experiments with K-Planes which show similar improvements with PyNeRF (2–6 dB gain in PSNR). We have included updated tables in our rebuttal PDF. **Unsupervised levels.** You are correct that one of the limitations of PyNeRF is that performance will degrade when zooming in and out of areas that have not been seen at training time. "Zoom-out" (rendering far-away views) can be handled by simulating far-away views during training, by simply adding downsampled training images. "Zoom-in" (rendering nearby views) is fundamentally challenging since one cannot easily simulate such views without hallucination-based methods such as super-resolution. Instead, we can force renderings to query only the finest level that was supervised at train-time (by maintaining an occupancy grid-like data structure), which should result in the same blurry artifacts that typical NeRF approaches exhibit during excessive "zoom-in". We will note these limitations in the camera-ready paper. **Storage.** To clarify why storage requirements are larger with our base method, each level is backed by an entirely separate feature grid (which in the case of iNGP happens to be a multi-resolution grid). 
In the general case (which doesn't assume anything about the underlying storage of the NeRF backbone), we create a separate table for each level (up to the target resolution of the level). In the case of iNGP, this is redundant and we can just use a single multi-resolution table for all levels. As you correctly assumed, we don't do this in the general case since this does not apply to backbones such as TensoRF or Plenoxels. That said, the use of a multi-resolution backbone is particularly natural for our method, and so we plan to present both the general and storage-optimized variant (for multi-resolution backbones) in the main paper. --- Rebuttal Comment 1.1: Title: Response to the rebuttal Comment: Thank you for clarify some details, I think that adding the discussion on the zoom in/out effect to the supplementary would help strenghten the paper. **Storage**: This is still somewhat confusing to me, but let me see if I understand correctly. Let's assume an INGP configuration with 16 levels, the original works will store 16 distinct hash tables (one per each level) with e.g. $2^{19}$ features each of dimension 2. When querying the features, features of each level will be interpolated trilenearly first and then concatenated to a 32 dim feature vector (16 levels of dim 2). If I understand correctly in your base case you have the following hash tables: for grid resolution 1 (1 hash tables with dim xx), for grid resolution 2 (2 hash tables with dim xx), ... for grid resolution 16 (16 hash tables with feature dim xx)? This would also mean that the dimension of the input to the MLP changes based on the level? What is the xx in this case? In the storage optimized version, you instead have the same as INGP, i.e. 16 distinct hash tables (one per each level) with e.g. $2^{19}$ features each of dimension 2. And you only index into a different MLP based on the grid resolution? --- Reply to Comment 1.1.1: Comment: Thanks for your response! 
Your understanding of storage for the base case is largely correct, but with two minor modifications. First, because very few ray samples map to the coarsest grid solutions, we chose to build separate grid resolutions for only the finest levels 8 - 16 rather than all levels 1 - 16. In practice, when sampling a 3D point along a ray that should map to a coarse level 1-7, we naively query level 8. We train separate MLPs per grid resolution - the first MLP would get 2\*8=16 features as input, the second would get 2\*9=18 features as input, etc. Second, we explored a variant where each of the 8 PyNeRF grid resolutions is backed by an internal 16-level multiresolution hash table with smaller scale factors. Here, the input feature size to each MLP is always fixed to 2*16=32. This variant performs slightly better at the cost of more storage (and is the base case reported in the paper). But importantly, the difference is minor and the storage-optimized version provides the best performance-storage tradeoff. Your understanding of the storage-optimized version is correct! Thank you once more for the questions - adding these clarifications will hopefully make for a stronger paper!
Summary: PyNeRF replaces the implicit representation in Mip-NeRF with a voxel-based representation method, which combines the cone sampling method and explicit structural representation by interpolating on different voxels based on coordinates of different scales. This method can be easily applied to existing accelerated NeRF methods. PyNeRF improves training speed by 60 times compared to Mip-NeRF while ensuring anti-aliasing effects and reducing errors by 20%. Strengths: * Experiments are performed on multiple datasets, including synthetic, real, and large-scale datasets, which demonstrate the superior performance and training speed of the proposed method. PyNeRF combines the anti-aliasing capability of MiP-NeRF with the fast fitting of explicit structure, showcasing the advantages of the two methods. * Three strategies are adopted for the interpolation method (GaussPyNeRF, LaplacianPyNeRF, pyNeRF), demonstrating that the PyNeRF approach can save storage, improve rendering speed, and ensure the same rendering quality as interpolating on multiple voxels. This verifies the rationality of PyNeRF's interpolation strategy. Weaknesses: * The paper states that the proposed method can be applied to existing accelerated NeRF approaches (L9). However, the results show only slight improvements when applied to other acceleration methods (PyNeRF - TensoRF-CP, PyNeRF - TensoRF-VM in Table 5 of the supplementary). It appears that the use of hash tables is the only approach that shows significant improvement. * Although the multi-scale sampling and interpolation methods in this paper play a role in anti-aliasing and accelerating training speed, they also increase the consumption of storage space. According to Table 5, using a shared hash table can achieve the same performance, so why not use a shared hash table? This would save storage space. * Moreover, adopted methods, e.g. 
multiscale sampling and multi-resolution voxels, have been proposed and widely used in many existing methods, which makes the novelty somewhat limited. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: * Further analysis is needed to demonstrate why the gain on TensoRF is marginal. * The feasibility of the method can be tested on more accelerated NeRF approaches. The storage size of this method may be the main factor that impedes its wider application. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: The authors have addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for reading our paper and for the constructive feedback. **Contribution.** To the best of our knowledge, our work is among the first to combine fast NeRF rendering with anti-aliasing. We agree that testing on more accelerated NeRF approaches would improve our paper. To that effect, we present additional results with K-Planes in the global rebuttal PDF which show a 2–6 dB improvement over the standard K-Planes baseline. Per your suggestion, we also reexamined our TensoRF results. As its authors state, it is designed for bounded scenes [1] and our initial results on unbounded scenes were especially poor. By adding optimizations not present in the original TensoRF method (namely Mip-NeRF 360's scene contraction [2]), we're able to improve baseline TensoRF performance (from 14.75 to 17.21 dB PSNR on unbounded scenes) and obtain a large improvement when combined with PyNeRF (21.35 dB). We will add these updated numbers and experiment details to the camera-ready version. **Storage.** As noted in the Limitations section, we agree that storage space is a potential tradeoff of our approach. Our intent with the shared hashtable approach in the supplement was to provide a mitigation for methods that use multi-resolution structures (iNGP, K-Planes), at the cost of generalizability to those that don't (TensoRF, Plenoxels). We will make this more clear in the revised paper by presenting both the general PyNeRF method and the storage-optimized version (for multi-resolution backbones) in the main paper. References: [1] Anpei Chen, Zexiang Xu, Andreas Geiger, Jingyi Yu, and Hao Su. 2022. TensoRF: Tensorial Radiance Fields. In ECCV 2022. [2] J. T. Barron, B. Mildenhall, D. Verbin, P. P. Srinivasan, and P. Hedman. Mip-NeRF 360: Unbounded anti-aliased neural radiance fields. In CVPR, 2022. --- Rebuttal Comment 1.1: Title: Thanks and Final Rating Comment: Thanks for the authors' rebuttal which has addressed my concerns. I would like to raise my rating. 
Besides, it is highly recommended to include the additional results and contents in the final version.
Summary: The authors introduce a pyramidal radiance field reconstruction method, which reuses a multi-scale feature grid representation and an area matching algorithm for level indexing. Specifically, the method trains a pyramid of models at different scales and interpolates point features between neighboring levels determined by the cone size to which the point belongs. The effectiveness of the proposed method is evaluated on the multi-resolution versions of the Blender, ADOP and Argoverse sensor datasets. The experiments show that the proposed method is able to provide better rendering quality while preserving high reconstruction speed. Strengths: 1. The motivation is clear. The manuscript addresses an interesting and important anti-aliasing problem for grid-based representations. It's good to see a paper on that. 2. The manuscript is well-written. Figure 1 and Algorithm 1 are very helpful for understanding the manuscript, good job. 3. The quantitative results reported in the tables are quite good. Weaknesses: 1. The quantitative results at the standard resolution are missing, such as single resolution for NeRF synthetic and Mip-NeRF 360, making it hard to judge the general performance. 2. The optimization process is unclear to me. Are the different level features optimized with the specific scale of the dataset (L113-115) or supervised only on the final rendering (L129)? 3. Experiment details are missing, such as the number of levels used for each dataset, the grid resolutions, and the feature channels and representation (dense grid?). 4. The manuscript is focused on anti-aliasing, but I could not find the zoom-in results in the appendix video; are there flickers when rendering sequences? Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Can this representation be trained and evaluated on a standard multi-view dataset? 2. 
Table 1 and 2, any insights on why the quantitive (Plenoxels, K-Planes, TensoRF and iNGP)) and the rendering video of iNGP is much worse than the result reported in the original papers? Is that because they are evaluated on multi-resolution? In this case, I would expect the rendering quality to be the same. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes, the authors provide a limitation section and make sense to me. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for reading our work and for the constructive feedback! **Single vs multi-resolution.** You ask why existing methods such as iNGP perform worse on multi-resolution datasets. Mip-NeRF [1] originally pointed out that prior work struggles on scenes where the same scene content is viewed from different distances across training and test cameras. For scenes where the camera distance remains roughly constant, PyNeRF performs similarly to the existing SOTA, as shown in Table 3 of our rebuttal PDF. Mip-NeRF [1] simulates views from different distances by adding 2×–8× downsampled images to the training and test sets. Similarly to mip-NeRF [1], our rebuttal PDF includes performance numbers on each individual downsampling level in Tables 1 and 2. **Optimization.** You ask whether we train different pyramid levels separately or jointly with the final rendering loss (L129). We train jointly; different samples along a camera ray typically map to different levels of the hierarchy (Figure 3(a)), but all samples are composited into a single color that is supervised with a per-pixel L2 loss. **Experiment details.** We will add more details to our camera-ready version. Our main experiments use a PyNeRF hierarchy of 8 levels corresponding to resolutions 128 to 16,384. Each level of the hierarchy is backed by a feature hashtable (same as iNGP), followed by small density and color MLPs (1 layer/64 channels and 2 layers/128 channels, respectively). We commit to releasing code and data upon acceptance to aid reproducibility. We will also add videos with zoom-in effects (which do not exhibit flickering). References: [1] J. T. Barron, B. Mildenhall, M. Tancik, P. Hedman, R. Martin-Brualla, and P. P. Srinivasan. Mip-NeRF: A multiscale representation for anti-aliasing neural radiance fields. In ICCV, 2021. --- Rebuttal Comment 1.1: Title: final rating Comment: Thanks for the rebuttal and clarification, I am leaning toward the original "Weak Accept" rating.
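The joint optimization described in the rebuttal above, where each ray sample queries the pyramid level it maps to before all samples are composited into one supervised color, can be sketched as a standard volume-rendering loop. Here `fields` is a hypothetical per-level lookup (position -> (density, color)), not the paper's API:

```python
import math

def render_ray(samples, fields):
    """Alpha-composite one ray. Each sample carries the hierarchy level it
    falls into; all samples blend into a single color (sketch)."""
    color, transmittance = 0.0, 1.0
    for pos, level, delta in samples:
        density, c = fields[level](pos)           # per-level field lookup
        alpha = 1.0 - math.exp(-density * delta)  # opacity of this segment
        color += transmittance * alpha * c
        transmittance *= 1.0 - alpha
    return color
```

The single returned color is what the per-pixel L2 loss supervises, so all levels touched by a ray receive gradients jointly.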
Summary: This work presents a method for anti-aliased renderings for grid-based NeRF representations by jointly optimizing a hierarchy of coarse-to-fine grids. The idea is neat and well justified by empirical evaluations that show quantitative and qualitative improvements over baselines. Strengths: 1. The proposed method seems simple and easy to reproduce. 2. Experiments are extensive, and demonstrate the effectiveness of the proposed method. Weaknesses: 1. The method is built upon the nerfstudio library, but the empirical results of that base model seem to be missing. It would be great to include those results to help better see the improvements made by the proposed anti-aliasing techniques. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. It says that the method is implemented on top of the nerfstudio library. Which specific model was it built upon? Is it the nerfacto one, or others? 2. What do the numbers 11.2, 8.6, 5.7 mean in Fig. 3(a)? 3. Is the MLP shared across the different mipmap levels? What is the MLP size? 4. Does the method apply to other grid-based representations like TensoRF or K-Planes? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Limitations are adequately addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for reviewing our paper! We’re glad that you appreciate the simplicity and effectiveness of our approach. **Nerfstudio.** Our implementation is indeed built on the library (we will release our code as a plugin) and is closest to Nerfacto. We list comparison numbers in the tables attached as part of our global response, which we will add to our camera-ready; we show a 1–7 dB improvement in PSNR. **MLPs.** We use separate MLPs per level: a 64-channel density MLP with 1 hidden layer followed by a 128-channel color MLP with 2 hidden layers. We use the same MLP sizes across the iNGP/K-Planes/Nerfacto baselines for fairness. **Other backbones.** A strength of our approach is its applicability to other grid-based representations - we list results with a K-Planes backbone as part of our global rebuttal, which shows a significant improvement over its baseline (2–6 dB PSNR), and we report a 4–6 dB improvement in PSNR when using TensoRF. **Fig 3(a).** The numbers illustrate the different hierarchy levels each ray sample could fall under (Equation 1 in the paper) - we will clarify.
Rebuttal 1: Rebuttal: We are glad that reviewers agree that we address "an interesting and important anti-aliasing problem," (JoFm), appreciate "simple ideas that lead to good performance improvement," (LTbu), and acknowledge that "experiments are extensive, and demonstrate the effectiveness of the proposed method." (PaSq) Two common concerns reviewers raised were that our contribution would be stronger with additional results on non-iNGP backbones (PaSq, pHej, LTbu) and that we should include additional details on our model architecture and experimental setup (LTbu, JoFm). We present additional experiments with K-Planes and TensoRF backbones in the attached PDF (Tables 1 and 2). Both variants show clear improvements over their base methods. We will gladly add more model and hyperparameter details to the camera ready and commit to releasing code and data to address any remaining reproducibility concerns. Pdf: /pdf/ad197f10ef275ad45a8caa9fcb8d459a8e36d755.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper presents a method to address the aliasing artifacts in grid-based NeRF. It introduces a pyramid of grids of different resolutions to represent a scene. To query the color and density of a 3D point with a certain integration volume, the method finds the two pyramid levels that best describe the point and performs a weighted combination of the outputs of the two levels. Experiments on both synthetic and real datasets demonstrate the advantages of PyNeRF over previous methods in rendering quality and speed. Strengths: - How to address aliasing artifacts is an important problem in NeRF. While mip-NeRF proposes an elegant solution for positional-encoding-based NeRF, extending this to grid-based NeRF is not straightforward. This paper presents a reasonable and novel approach to address the aliasing issue in grid-based NeRF. This is achieved by maintaining a grid pyramid with different levels of resolution and only using suitable levels during rendering. - The proposed PyNeRF shows clear advantages over previous approaches, as demonstrated in experiments. The rendering accuracy is significantly better than other grid-based NeRFs and in most cases better than mip-NeRF. Qualitative results also show that PyNeRF better captures fine details. The running speed is on par with other grid-based NeRFs and much faster than MLP-based NeRF. - The effectiveness of the proposed design choices is verified in ablation studies. Weaknesses: - The technical contribution is weak, as the proposed method is rather simple and does not involve much technical challenge. iNGP already has a multi-resolution grid, and this work simply chooses the two levels suitable for the scale of the point. Although this strategy is shown to be effective, I doubt whether the technical contribution is enough to be presented at NeurIPS. - Some descriptions are not clear or reasonable. For example: - The sentence on Lines 121 and 122 is unclear. It is not well explained how P(x) is calculated. 
It seems to be the projected area from the voxel to the 2D plane. If this is the case, then it is a somewhat odd design. A voxel is a cube, so its projected area on the 2D plane is view-dependent. But the selection of the scale level should depend not on the view but on the distance between the 3D point and the camera. - The first version of the method is called GaussPyNeRF. However, it is not explained how the method is related to Gaussians; I do not see a clear connection. - Line 156: "We can quickly train these along with our initial low-resolution model and then use them to train higher-resolution levels in a sample-efficient manner." It is not clearly described how this is done. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Authors can respond to my concerns mentioned in weaknesses. - Line 155: "many contemporary NeRF methods use occupancy grids or proposal networks to generate refined samples near surfaces". I suggest adding related citations here. - Line 35: fast-rendering approach -> grid-based approach. Fast rendering includes not only grid-based approaches but also others like point-based approaches. This paper targets grid-based approaches. - The title does not mention anti-aliasing, which is the main problem to be solved in this paper. A minor suggestion is to mention it in the title. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for reviewing our paper and for the valuable feedback! **Contribution.** Rather than seeing the simplicity of our method as a weakness, we humbly agree with Reviewer LTbu's statement that "the main strength of this paper *is* its simplicity". As you state, our simple method significantly outperforms the state of the art. **Lines 121-122.** We calculate P(x) solely based on the distance between the 3D point and the camera, similar to Nerfstudio's utility method [1], and then compare it to the resolution hierarchy described in lines 115-116. For example, for a point along the camera ray with a projected pixel area of 0.125, and a PyNeRF hierarchy corresponding to resolutions [2, 4, 8, 16], we query the third level of the hierarchy (with resolution 1 / 8 = 0.125). We will clarify in the camera-ready. **GaussPyNeRF.** Our intent was to draw an analogy between the resolution and details captured by the different levels in our NeRF hierarchy (Fig. 2) and those characteristic of Gaussian pyramids used in classic image processing [2]. We will clarify. **Line 155.** SOTA methods either use a proposal network (Mip-NeRF 360, Nerfacto, SUDS, K-Planes) or an occupancy grid (iNGP) to learn a coarse 3D representation and guide sampling near surfaces (vs wastefully querying empty space). Occupancy grids / proposal networks are usually trained jointly with the main NeRF network, which takes significant time at city scale. Selectively training in a coarse-to-fine manner with the low-resolution PyNeRF levels as described in Sec. 4.4 speeds up convergence. **Line 35.** Agreed - we will amend! References: [1] https://github.com/nerfstudio-project/nerfstudio/blob/9b6010ea0d20e7ca2a68496bb33d6dcd03bf9b91/nerfstudio/cameras/cameras.py#L849 [2] E. Adelson, C. Anderson, J. Bergen, P. Burt, and J. Ogden. Pyramid methods in image processing. RCA Engineer, 29(6), 1984. --- Rebuttal Comment 1.1: Title: Response to the rebuttal Comment: Thank you for the clarifications. 
The rebuttal addresses most of my concerns and I remain positive about this paper. Please make these revisions accordingly in the revised version.
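The level selection explained in the rebuttal above (a sample's projected pixel area of 0.125 selects the resolution-8 level of a [2, 4, 8, 16] hierarchy, since 1/8 = 0.125) can be sketched as a nearest-match in log scale. The function name and tie-breaking are assumptions for illustration, not the paper's Equation 1, and the paper additionally blends the two nearest levels rather than picking one:

```python
import math

def select_level(pixel_area, resolutions):
    """Return the index of the hierarchy level whose cell size
    (1/resolution) best matches the sample's projected pixel area."""
    target = math.log2(pixel_area)
    return min(range(len(resolutions)),
               key=lambda i: abs(math.log2(1.0 / resolutions[i]) - target))
```

For the rebuttal's example, `select_level(0.125, [2, 4, 8, 16])` returns index 2, the third level.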
Encoding Human Behavior in Information Design through Deep Learning
Accept (poster)
Summary: The study demonstrates commendable efforts in employing supervised learning techniques and utilizing Amazon Mechanical Turk to acquire data to develop a human behavior descriptor. The authors further utilize neural networks to optimize the sender's signaling scheme based on the fitted human decision-making model, aiming to maximize the sender's expected returns. The experiments conducted are extensive and encompass a broad range of participants, which helps avoid bias. Strengths: The work is one of the early works that attempt neural network-based approaches for information design. The work also heavily involves human behavior, which is meaningful and potentially useful. Weaknesses: Major issues: 1. In line 27, the assumption of Bayesian rationality does not appear to be a limitation of previous works. Prior research primarily focused on studying information design problems, such as equilibrium between agents and when senders can benefit from persuasion. The authors' research, however, addresses a different direction, representing a distinct problem. Therefore, the assumption made in previous works should not be claimed as a "limitation". 2. This paper implies that $H$ must have encountered all possible $\pi$ for the two-stage process employed to be reasonable. Otherwise, the sender and receiver should adapt to each other iteratively in the search for equilibrium, rather than being divided into two distinct stages. This is a strong assumption because $\pi$ is continuous, but the paper does not emphasize this assumption adequately. Furthermore, treating the continuous variables $\pi$ and the priors as features is a simplistic neural approach that may perform worse when the state and action spaces become larger. Minor comments: 1. This paper explores information design problems in a sequential manner with multiple receivers. 
From the settings, the receivers are independent and may have different priors, and the sender persuades them through a public communication channel. It would be better if the authors emphasized these assumptions. Also, I notice that several previous works study this kind of task, e.g., [Interactive Information Design](https://shs.hal.science/halshs-01791918/file/Publi_2.pdf) and [State-Dependent Central Bank Communication with Heterogeneous Beliefs](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3923047). 2. In line 54, the authors claim to be the first to incorporate neural networks. However, we have come across some arXiv preprints that have already explored this area, such as: [Learning to Persuade](https://openreview.net/forum?id=0oSM3TC9Z5a). 3. The example in line 110 may require more description. For example, the reward functions of both agents are not formally introduced. 4. In lines 218-223, could the authors provide a more detailed description of this example? Why $\pi^*(\sigma=1\mid\theta=1)$? As far as I know, the optimal signaling scheme would be symmetric if the seeds of the experiment code are random. 5. In line 285, why are there no known solutions for three states and three actions? Shouldn't this be solvable in a similar manner? 6. The human experiments are intriguing, and I am curious about some issues regarding them. For example, during the experiments, did the human participants undergo a learning process? Did the players have any prior experience with trial runs? Were the players allowed to communicate with each other? Also, I noticed that the questionnaires provided in the appendix appear to be quite professional. How did the authors ensure that the participants understood the game? Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: N/A Confidence: 5: You are absolutely certain about your assessment. 
You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 3 good Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful review. We first respond to your major concerns. **[The assumption of Bayesian rationality as a limitation]** While Bayesian persuasion has provided an elegant framework for studying information design, it has made several assumptions that limit its practical impact. One of them is the assumption that the receiver is Bayesian rational. Meanwhile, extensive empirical works (e.g., Camerer 1998; Benjamin 2019) have shown that people often systematically depart from Bayes’s rule when confronted with new information. By “limitation” we mean that directly applying the techniques assuming Bayesian rational receivers when the receiver is not would lead to suboptimal outcomes. As also noted in recent work by de Clippel & Xu, JPE’22, new treatments need to be developed to account for receivers who make mistakes in probabilistic inference and decision making. We will make our descriptions clearer to reflect this. **[Assumptions on human models]** We want to first highlight that our framework is designed to accommodate various representations of human behavior. As shown in our evaluations, it can handle both standard closed-form representations of human models (e.g., Bayesian rationality or the TH-model) and data-driven forms. The reviewer's concern mostly concerns the data-driven human models. Indeed, if our human model is entirely trained on historical data, we would require the training data to be representative. Note that even in training data-driven human models, we do not require the data to encounter “all” scenarios, as the goal of ML is to generalize beyond the training data. We also want to note that this concern can be mitigated by leveraging the recent developments at the intersection of cognitive science and ML that have shown we can utilize established human models (like Bayesian rationality or the TH-model) as priors and harness data to refine these models [Bourgin et al. 
ICML 2019, Peterson et al. Science 2021]. The main benefit of this approach is that it provides a reasonable model to begin with even when we do not have enough data and could lead to a more accurate model with more data. Our framework can seamlessly integrate with human models developed using such methodologies. - Bourgin et al. "Cognitive model priors for predicting human decisions." ICML. 2019. - Peterson et al. "Using large-scale experiments and machine learning to discover theories of human decision-making." Science. 2021. We now respond to your minor concerns. **[Multiple receivers]** There seem to be misunderstandings of our settings. In the multi-receiver setting that we are addressing, our goal is to design a single information policy for multiple (potentially heterogeneous) receivers simultaneously. We do not interact with each receiver in a sequential manner. Our multi-receiver setting is known as public persuasion [60]. We will provide clarification of our settings and also include discussion of the additional related work as the reviewer suggested. **[Related work]** Thanks for the pointer! We will include the work in our revisions. We didn’t notice this paper since it doesn't seem to be published yet and doesn’t have a public arXiv version (albeit it has an openreview record). Our work is generally a more comprehensive investigation of automated information design and has addressed general forms of human behavior. **[Explain the optimal information policy of binary actions/states]** In this example, we adopt the most classical setting with binary states/actions introduced in the original Bayesian persuasion paper [32]. It’s our oversight that we didn’t specify all the details. In particular, the receiver utility $u^R(0,1)=u^R(1,0) = 0$, and we randomly draw values from $[0,1]$ for $u^R(0,0)$ and $u^R(1,1)$. 
In plain words, in this setting, the receiver prefers action 1 when the state is 1 and action 0 when the state is 0, and the goal of the sender is to persuade the receiver to take action 1. We will fix the description and provide a full description. **[No known solutions for non-Bayesian-rational receivers]** To the best of our knowledge, the studies on deriving optimal information policies when the receiver is not Bayesian rational are limited. The solution for the setting with binary actions/states is from Tang and Ho [56]. However, their approach analytically derives a closed-form solution for the optimal information policy for binary actions/states, and their derivation doesn’t seem trivial to extend to multi-state and multi-action settings. **[Clarifications of human experiments]** Participants could not communicate with each other, and each individual was allowed to participate in the study only once. During the human experiments, participants were provided with instructions and an illustrative example of the problem instance to help them understand the task setting. We also offered incentives in the form of bonus payments for those who correctly inferred the latent state and made the right decision. However, despite these measures, we cannot guarantee that every participant fully understood the task or put in a reasonable effort. Nonetheless, this is consistent with many real-world human decision-making scenarios. Furthermore, as mentioned in our response to Reviewer 7rN5, our framework is effective as long as the human behavior used to train the neural network (NN) for the information policy comes from the same distribution as the human behavior observed during deployment. Our experiments aim to meet this condition by recruiting first-time participants from the MTurk platform for both phases. --- Rebuttal Comment 1.1: Title: Thank you for the rebuttal Comment: I thank the authors for providing a response. I would focus on the two main issues for the discussion. 
I believe that the rationality assumption is rather general, and the authors' assumption is instead limited. My argument is as follows: For computational problems, rationality assumption is the most general setting that succinctly describes a fully utilitarian agent. The results obtained from the rationality assumption could be adjusted to non-rational settings. On the contrary, the results obtained from specific treatments are good for these treatment settings only, which could only represent a subgroup of humans. I did inspect the code the authors provided during the review. The signaling function $H$ takes $\pi$ as one of the inputs. In simple games this $\pi$ could be represented by a vector or some similar objects. In this case a neural network $H$ could definitely generalize the vector to unseen vectors. In complicated tasks, where $\pi$ is, say, a neural network as well, I don't see how $H$ takes a neural network as an input and also generalizes that to unseen $\pi$ neural networks (not until the authors provide an implementation and experiments for that). My rating would remain the same primarily for the second issue. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the follow-up questions! **[Rationality assumption]** We want to first clarify that our framework is intended to incorporate a general set of human models, including the one with standard rationality assumption. As shown in our simulation and experiments, our approach can work with the standard rational models, the models from behavioral economics (e.g., TH-Model), and even models learned from data. That is, we do not assume a new human model and work with that particular model. Instead, we are proposing a new framework that can work with a more general set of human models (including the standard one). In this sense, we believe our work is a generalization of standard approaches that work with only rationality assumptions. 
We agree with the reviewer that the rationality assumption succinctly describes a fully utilitarian agent, and it provides an elegant formulation for us to develop mechanisms and algorithms with human participation. In the meantime, we also want to highlight that this rationality assumption has often been criticized for not being able to accurately describe human behavior. The research field of behavioral economics has been dedicated to addressing this criticism. Moreover, there is a growing effort in combining ML and cognitive science to provide more accurate and interpretable human models that go beyond the standard rationality assumption (e.g., see Bourgin et al. ICML 2019 and Peterson et al. Science 2021 in our rebuttal). Our work provides a framework to leverage these new developments of human models in the design of mechanisms in the context of information design. **[Scalability concern of human models]** The first point we want to mention is that our approach works with both closed-form and data-driven forms of human models. As shown in our human-subject experiments in Figure 4, if we have a reasonably accurate closed-form representation of humans (e.g., the TH-model), we can achieve reasonably good performance even without using data-driven human models. The reviewer’s concern is mainly associated with the case when we aim to utilize data-driven forms. We now address the reviewer’s concern about having $\pi$ as the input in training the human model $H$ with a data-driven approach. Recall that $\pi(\sigma|\theta)$ is a conditional distribution specifying the probability of sending a signal $\sigma$ given a realized state $\theta$. In cases when the space of signals and states is discrete, each information policy $\pi$ can be represented as a table (i.e., it can be represented as a vector), with each cell specifying the conditional probability. When training $H$, we only need to specify the input $\pi$ in this tabular representation. 
One note we want to clarify is that, in our HAIDNet framework, we have optimized a neural network for generating policies. This neural network encodes information policies for all problem instances within the given setting, i.e., it takes a problem instance (i.e., a realization of priors and utilities) as input and outputs an information policy for that problem instance in the above tabular form. When we are training $H$, our input is the problem instance with an information policy. We do not need to feed the entire neural network as the input; we only need to provide one information policy as the input. So a tabular representation for $\pi$ is sufficient in training $H$ in our framework. The above discussion is under the settings with discrete states/signals. When we want to extend to the continuous setting (also raised by reviewers AWce and 5W54, to whom we have provided responses as well), it creates additional challenges. We should note that this challenge is not unique to our approach; it also exists in the traditional information design literature. One potential method to address this is through discretization, though it generally requires some smoothness assumption (e.g., a small deviation of states wouldn’t lead to significantly different outcomes) to ensure the discretization error can be bounded. Moreover, as mentioned in our rebuttal, the recent research on combining closed-form representations with data-driven approaches in developing human models could also mitigate this concern. Please let us know if you have additional questions. --- Reply to Comment 1.1.2: Title: Follow up to the Reviewer Comment: We would like to thank the reviewer again for the comments. 
We believe we have addressed the reviewer’s concerns, including the major one: >“In complicated tasks, where $\pi$ is, perhaps, a neural network, I don't understand how $H$ accepts a neural network as an input and also generalizes that to unseen $\pi$ neural networks (not until the authors provide an implementation and experiments for that)” We believe the reviewer has a misunderstanding of our setting. As mentioned in our previous response, within our framework, $H$ takes instances of $\pi$ as inputs. These instances can be represented as vectors, even in complex tasks. Therefore, there is no need to input an entire neural network when training $H$, and our current implementation already takes care of encoding instances of $\pi$ as vectors. We hope this clarification addresses the reviewer's concern. Please let us know if there are any additional questions.
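For context, the binary-state setting described in this rebuttal thread (the receiver prefers the action matching the state; the sender wants action 1) has a classical closed-form optimum for a Bayesian-rational receiver, going back to the original Bayesian persuasion paper [32]. A sketch, assuming the receiver breaks indifference in the sender's favor; the function name is illustrative, not the paper's code:

```python
def optimal_binary_policy(mu1, u00, u11):
    """Return (pi(sigma=1 | theta=1), pi(sigma=1 | theta=0)) maximizing the
    probability a Bayesian-rational receiver takes action 1.
    mu1: prior P(theta=1); u00, u11: receiver payoffs for matching actions."""
    p_star = u00 / (u00 + u11)  # posterior at which the receiver is indifferent
    if mu1 >= p_star:           # action 1 is already optimal under the prior:
        return 1.0, 1.0         # an uninformative policy suffices
    # Always recommend action 1 in state 1; in state 0, mix just enough that
    # P(theta=1 | sigma=1) lands exactly on p_star.
    q = mu1 * (1.0 - p_star) / ((1.0 - mu1) * p_star)
    return 1.0, q
```

With `mu1 = 0.3` and `u00 = u11 = 1` this recovers the textbook example: `p_star = 0.5`, `q = 3/7`, and the receiver takes action 1 with probability 0.6, double the prior.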
Summary: Information design is the problem in which a sender would like to optimize the information sent to a receiver, such that the receiver takes actions that the sender likes. As a simple example, a company may want to optimize what information they communicate about their product, to get consumers to buy the product. The typical Bayesian-rational model assumes that the receiver knows the policy by which the sender chooses what information to communicate, performs an appropriate Bayesian update, and then chooses an action that maximizes their expected utility (which can be different from the sender’s utility). The authors consider a more general setting in which the receiver is given the same information, but no constraints are placed on how they act on this information. In general, information design problems are very difficult to solve (one result shows that a particular setting is #P-hard). The authors suggest that we instead solve information design problems using deep learning: in particular, we train a neural network (HAIDNet) to take in a specification of an information design problem, and output a policy for information to communicate. Crucially, HAIDNet takes in a (differentiable) specification of the receiver’s policy, and so can also work with models of the receiver that are not Bayesian-rational – including a neural network trained to mimic human behavior. Experiments show that in small problems where the optimal policy is known, HAIDNet produces results that are nearly optimal (achieve a sender reward that is single-digit percentages away from optimal, well above that achieved by a random policy). HAIDNet continues to produce good-looking results in larger settings where the optimal policy is not known, though it is hard to tell how good the results are. 
Finally, the authors run an experiment with real humans, where they first elicit the human policy, train a neural network to imitate the humans, and then train HAIDNet to solve the information design problem with the neural network as the receiver policy. They show that HAIDNet produces a solution that, when tested with real humans, significantly outperforms the optimal solutions under the assumption of Boltzmann rationality, and non-significantly outperforms the TH-model (a model from prior literature), illustrating the benefit of using data-driven models of human behavior. Strengths: 1. By far the biggest strength of this paper is its experiment with humans. Showing strong performance with real humans is a significantly higher bar than showing good results in simulations with human stand-ins; this paper meets this higher bar. 2. The setting of information design is scientifically interesting, and to my knowledge deep learning has not been applied to it before (though I am not familiar with the literature). 3. The authors show two major benefits from using deep learning: the ability to approximate answers with arbitrary (differentiable) human behavior models, as well as the ability to compute approximate solutions much faster than with e.g. linear programming. (That being said, I do not know the classical state of the art for approximately solving information design problems.) 4. The authors’ approach is simple and natural, suggesting it would be easy to replicate. Weaknesses: 1. Scalability: Information design problems require networks to take full policies as an input to the network. This will be a challenge for large problem instances, and isn’t tackled in this paper. 2. Ethics: Encoding realistic models of human behavior into information design seems particularly likely to have negative social impact. 
**Scalability** Information design problems require networks to take full policies as an input to the network: in particular, the network modeling human behavior must take the full stochastic policy that the sender uses to make decisions as an input. (The HAIDNet does not need to take the full human policy as input, it just samples from the policy on appropriate inputs as needed.) This can be done when there are a small number of possible states and possible signals to send, as in the simple experiments done in this paper. However, in realistic settings such as recommender engines deciding which ratings to display, or politicians deciding how to design policy experiments, there will be a huge number of possible states and signals, which cannot even be enumerated. It’s not clear how HAIDNet would scale to these situations. (That being said, it does not look impossible, or even necessarily very hard. Ultimately there is a bi-level optimization problem, and while those are hard to solve, there are many techniques that get traction on the problem.) I do think the contribution of the paper is significantly more limited due to the unscalable nature of the current architecture. **Ethics** I am worried that this research will primarily contribute to negative impacts on society, and have flagged it for ethics review below. If I were making the decision based only on the information I currently have, I would not publish the paper and would encourage the authors to set aside this research area and move on to other work. However, this is not one of the areas mentioned in the NeurIPS code of ethics, so I am not incorporating this into my evaluation for the paper, which I will base just on the scientific merits of the paper. I will leave it to the area chair / ethics reviewers to decide how to incorporate ethical considerations into the decision. 
The authors note that typically in information design, the sender is the party in power, while the receiver is not, and so optimizing for the sender’s utility can result in using a sender’s informational advantage to exploit the receivers. I agree with this, but in addition I think moving from a Bayesian-rational model to a data-driven human behavioral model makes the problem much worse. For a Bayesian-rational receiver, we have the following property: the receiver’s beliefs over the underlying state are well-calibrated (they follow from an appropriate Bayesian update that incorporates how the information was generated). Relative to getting no signal at all from the sender, this can never harm the receiver (in expectation), since for a rational Bayesian agent the value of information is always nonnegative. This bounds the amount of harm that can be done from information design. (It is of course still possible that the receiver does much worse than if the sender simply provided the information most useful to the receiver, which we would count as a harm in many cases, e.g. the sender may probabilistically choose not to reveal that a more expensive product is actually lower quality, so that the receiver buys the more expensive product some of the time; we would plausibly consider this unethical since the sender could have provided better quality information allowing the receiver to make a more informed decision.) In reality, human receivers will not be Bayesian-rational. However, a sender policy that was chosen based on a Bayesian-rational receiver model is still going to produce actions whose intended effects are to give the receiver true information. In particular, it will _not_ choose actions based on their propensity to mislead or trick the receiver, because it is impossible to trick the Bayesian-rational receiver. However, once we move to data-driven models of human behavior, we lose these guarantees. 
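The claim that information has nonnegative value for a Bayesian-rational receiver can be checked on a toy instance (all numbers hypothetical, not from the paper):

```python
import numpy as np

prior = np.array([0.5, 0.5])                  # P(state)
signal_policy = np.array([[0.8, 0.2],         # P(signal | state)
                          [0.3, 0.7]])
receiver_utility = np.array([[1.0, 0.0],      # U_R(state, action)
                             [0.0, 1.0]])

# Without any signal, the receiver best-responds to the prior alone.
u_without = max(prior @ receiver_utility)

# With the signal, the receiver Bayes-updates on each realized signal and
# best-responds; averaging over the signal marginal gives expected utility.
u_with = 0.0
for s in range(signal_policy.shape[1]):
    joint = prior * signal_policy[:, s]       # P(state, signal = s)
    p_s = joint.sum()
    posterior = joint / p_s
    u_with += p_s * max(posterior @ receiver_utility)

# For a Bayesian-rational receiver, observing the signal can never hurt in
# expectation, regardless of the signaling policy the sender commits to.
assert u_with >= u_without - 1e-12
```

Here `u_without == 0.5` while `u_with == 0.75`; the inequality holds for any `signal_policy`, which is exactly the bound on harm discussed above — and it is this bound that is lost once the receiver model is a learned, possibly biased, behavioral model.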
A data-driven model of human behavior will likely also include many human biases, and so a sender policy optimized against such a data-driven model will learn to exploit these biases. For example, it may: 1. Appeal to irrational fears (e.g. selling products by making some risks emotionally salient, even if the risks are tiny in practice – maybe an expensive shark repellant to ward off shark attacks) 2. Understand which claims humans can check (i.e. which aspects of the sender’s policy the receiver can observe and is sensitive to), and lie about any claims that can’t be easily checked 3. Identify markers of credibility, and design signals that exploit credibility markers (e.g. endorsements from celebrities or scientific experts with strong public reputations, but who give their endorsement freely to even poor quality products). I’m sure there are many more such applications; these are just for illustration. We already see some of these effects with advertising and politics, and that is with relatively unsophisticated models of human behavior. All of these effects could be significantly exacerbated if there are significantly more accurate data-driven models of human behaviors, or if these models could be personalized to specific individuals. In the case of personalized models, I would also expect to see significant inequality effects, where more elite, well-connected, and knowledgeable people will better understand how automated information design results in manipulation, and will be better able to counteract its effects, and meanwhile worse-off people will be more vulnerable to manipulation. There are of course potential positive applications as well: for example, this could be used to design more effective campaigns to improve public health (e.g. leveraging knowledge of human biases to get people to wash their hands more often). However, it seems like the negative applications are both more numerous and more likely to happen. 
Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Do you have any plans or ideas for scaling up HAIDNet to much larger problems, where the full state space is implicit and the receiver policies cannot be enumerated in full? Can you elaborate more on the ethical implications / broader impacts of this research? (See discussion above; in particular I’m interested in a response to the point about exploiting systematic human biases.) Minor suggestions: Line 305: Mention that in the human experiment you tell the humans the prior in addition to the conditional probabilities (currently it only mentions the conditional probabilities). Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 2 fair Limitations: The authors have some discussion of potential negative societal impacts, but I think it is insufficient – see Weaknesses. I would also recommend that the authors briefly discuss the limitation of scalability. To get additional space, I think it would be fine to remove the discussion about how gradient descent does not find optimal solutions, and how the resulting solutions may not be robust to distributional shift: the NeurIPS audience will tend to assume these two limitations by default for any paper applying deep learning, unless the paper explicitly claims otherwise. Flag For Ethics Review: ['Ethics review needed: Discrimination / Bias / Fairness Concerns', 'Ethics review needed: Inappropriate Potential Applications & Impact (e.g., human rights concerns)'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the insightful comments! We will incorporate the clarification suggestions. Below we respond to the two major comments. **[Scalability]** In our framework, there are two separate scalability considerations: *Scalability of optimizing information policy*: We'd like to begin by noting that scalability is a recognized challenge in the traditional study of information design. A key motivation for our work is the result by Dughmi and Xu [15] that computing the optimal information policy is #P-hard. We have empirically examined the scalability of our approach in Appendix B.1.2. Specifically, as depicted in Table 5, for the more computationally heavy setting with multiple receivers, the standard linear programming (LP) method of solving the information policy scales poorly, taking 0.3 seconds to solve an instance with one receiver and over 23 hours to address an instance with 17 receivers. In contrast, our approach demonstrates better empirical scalability. It takes 0.3 hours to train a neural network (NN) for one receiver and 1.12 hours for 17 receivers. It's noteworthy that the trained NN can generate information policies for all problem instances, and generating a policy for an instance takes less than 0.352 seconds at test time, even for cases with 17 receivers. These empirical results suggest that our method offers a much more scalable solution for designing information policies compared to traditional methods. In scenarios with continuous action/state spaces, the information design problem is generally challenging to solve, both in the traditional information design literature and in our approach. To address this challenge, one potential approach is to employ discretization. However, this approach requires some smoothness assumption (e.g., small deviations of actions wouldn’t lead to significantly different outcomes) to ensure the discretization error can be bounded. We will add discussion in the main text in the revision. 
*Scalability of approximating human behavior*: Our framework is designed to accommodate various representations of human behavior. As shown in our evaluations, it effectively handles both the standard closed-form representations of human models (e.g., Bayesian rationality or TH-model) and data-driven forms. While we concur with the reviewer's concern regarding the scalability of training a purely data-driven human model, recent studies at the intersection of cognitive science and ML have shown that we can utilize established human models (e.g., Bayesian rationality or the TH-model) as priors and harness data to refine these models [Bourgin et al. ICML 2019, Peterson et al. Science 2021]. The main benefit of this approach is that it provides a reasonable model to begin with even when we do not have enough data and could lead to a more accurate model with more data. Our framework can seamlessly integrate with human models developed using such methodologies, mitigating the scalability concerns. - Bourgin et al. "Cognitive model priors for predicting human decisions." ICML. 2019. - Peterson et al. "Using large-scale experiments and machine learning to discover theories of human decision-making." Science. 2021. **[Ethics]** We wholeheartedly agree with the reviewer's concern, which is the reason we brought up this point during our discussion. The information design literature, and more broadly, the principal-agent problem or Stackelberg game in economics, presents a scenario where one advantaged party might exploit its informational advantage over the disadvantaged party. As the reviewer noted, this concern could be amplified as we acquire more accurate knowledge about the disadvantaged party. That said, we still believe it is important to conduct our research. First, as also pointed out by the reviewer, this practice of exploiting irrational human behavior is ubiquitous and is already deployed by the private sector in the domains of marketing and political campaigns. 
In order to develop risk mitigation methods, it is our belief that we first need to understand the capacities of this approach. Secondly, there are also benign ways of utilizing this approach, e.g., encouraging the general public to take actions promoting social good or preventing individuals from taking self-sabotaging decisions. Without a public grasp of both the potential risks and benefits, we risk leaving the utilization of these techniques to the private sector exclusively, which, arguably, could present an even more concerning situation. We also want to discuss two potential risk mitigation methods in light of the concern raised. Firstly, on the technical front, we might employ differential privacy techniques to control the amount of personal data used in training human behavior models. Differential privacy offers a means to balance privacy with utility, typically by introducing controlled noise into the data. This mechanism is potentially helpful in mitigating the exploitation of marginalized groups, an issue raised by the ethics reviewer. Secondly, from the policy perspective, after we develop a deep understanding of the capability of information design with data-driven human models, we as a society could and should discuss the tradeoff between the utility we can gain from this method and the harm we might suffer. This tradeoff discussion can then lead to the development of regulations and policies. For example, we could add constraints requiring that the deployed information policy shouldn’t lead to much lower receiver utility, compared with deploying an information policy designed assuming Bayesian-rational models. This is similar to one common approach of imposing fairness constraints in the recent fairness literature. But again, these discussions require us to develop a good understanding of the capability of this approach in the first place. We will provide more discussion on the above in the revision. 
--- Rebuttal Comment 1.1: Title: Thanks for the response Comment: I have read the authors' rebuttal and the other reviews, and am maintaining my score of 6 (again noting that it doesn't take into account the ethics flag). ## Scalability Yes, I agree with everything you write here. I was critiquing the scalability to larger games in particular (whether continuous state/actions, or discrete states/actions with a large number of states + actions). I agree that the method can already take advantage of other advances in human modeling, and also agree that the method already shows significant advantages relative to exact solvers. ## Ethics While I agree that it is helpful to understand a problem in order to fix it, from this perspective I think the natural thing to do is to investigate the existing techniques that people use, and attempt to design risk mitigation methods. I would not be designing novel techniques: if these techniques are already used then my time would be wasted reinventing the wheel; if the techniques are not used then I have just increased capacity for harm before I’ve even started to design mitigations. I agree there are also beneficial uses, but my expectation is that the harmful uses are more common and more impactful than the beneficial ones (though of course this is based on guesswork, and would benefit from a more thorough investigation). If the beneficial uses significantly outweighed the harmful uses I would withdraw my objection.
Summary: The paper focuses on the problem of automated information design, where the sender strategically reveals information to persuade the receiver to take specific actions. The main contribution of this paper is addressing the challenge of modeling human behavior when individuals do not act as Bayesian rational agents. The problem is formalized as an optimization problem, and the authors propose a neural network architecture (HAIDNet) to solve this optimization problem. The effectiveness of their approach is evaluated through experiments conducted in both simulation and real-world settings. In their simulations, they show that HAIDNet can solve settings _without_ efficient solutions, including multiple receivers and non-Bayesian receivers. They then fit a neural network model to model how human subjects update their preferences in practice, and show that HAIDNet significantly improves on baselines in convincing human subjects to change their behavior. Strengths: * The method proposed in the paper is intuitive and makes sense. * The writing is clear and effectively communicates the ideas, with Figure 1 providing a helpful visual illustration of the concepts. * The experimental analysis is extensive and noteworthy, particularly with the inclusion of results with real human subjects. This adds credibility to the findings. Weaknesses: * It is unclear how scalable this approach is. To fit the neural network, a large amount of data needs to be collected, and such data is often scarce. For example, in their experiments, they used 2000 worker-rounds of product purchasing decisions to fit a small 3-layer neural network on a binary task; more realistic human models would likely require substantially more data. * The proposed method requires differentiable human models, which can be hard to come by in practice, especially in cases where the human models are e.g. designed by hand or feature random components. 
* The learned human models and information policies are hard to interpret, which is compounded by the issue of scarce data. It's not clear what exactly the small neural networks have learned in their experiments, and the authors do not discuss this in their paper. In practice, it seems important to understand what these models have actually learned. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. I'd be interested in hearing more about the relationship between this work and Reddy, Levine, and Dragan 2021*. In that work, they also fit a neural network policy (albeit a larger, recurrent one) to synthesize observations to influence the actions of human users. Similarly, I think there's some amount of related work in the human-assistance literature in general that feature similar algorithms (where a blackbox neural network is used to learn a policy that performs well according to a utility function with human subjects). 2. I'm somewhat puzzled about why you call it an architecture, when HAIDNet seems to me to be more of an algorithm for training arbitrary neural network architectures. In my case, I was a bit confused by this, and expected a special neural network architecture more directly designed for modeling information policies, but instead the paper uses 3-layer MLPs. I'd appreciate if this was made more clear in the introduction and in section 3.2. 3. Similarly, I would appreciate more discussion on the core assumptions of the HAIDNet framework. What do you think are the crucial parts of the algorithm related to the key insights, and which parts are contingent on implementation details? For example, is it important that you optimize the policy with SGD instead of an RL algorithm like PPO or DDPG? Is it important that the neural network is a 3-layer MLP? \* Reddy, S., Levine, S. and Dragan, A., 2021, October. Assisted perception: optimizing observations to communicate state. In Conference on Robot Learning (pp. 748-764). PMLR. 
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors describe the key limitations of HAIDNet as 1) possibly finding only locally optimal solutions as it uses Adam (a local first-order optimizer) to update the information policy and 2) only getting good performance _in expectation_, as opposed to uniformly across all cases. However, I think in comparison to these two limitations, the scalability, differentiability, and interpretability issues described in the weaknesses section above are more important for using their method in practice. **(Negative) Social impacts** The authors correctly note that their method (as with most other work in this space) can be used for disinformation or user manipulation. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the valuable feedback and comments! **[Scalability]** Please see our responses to Reviewer 5W54. **[Differentiable human models]** While the standard rationality model is not differentiable, many of the other models are, e.g., discrete choice models and data-driven models. The differentiable assumption essentially requires human behavior to be smooth, which is often approximately satisfied if we consider the stochasticity of human behavior or focus on dealing with a population of humans. In cases when the human model is not differentiable, we can utilize standard approaches, e.g., softmax relaxations, to approximate human behavior. We will add discussion in the revision. **[Interpretability]** For the interpretability of information policy, we would like to first note that this is an open question even in the standard information design literature. While in some settings, e.g., when the receiver’s optimal action depends only on the expected state, an interpretable optimal information policy can be characterized with certain mild assumptions (see, e.g., Arieli et al EC’20; Kolotilin et al, Theoretical Economics'22), to the best of our knowledge, there are no existing characterizations on interpretable information policy for general settings. In our work, to increase interpretability, one potential approach is to add constraints in the sender’s optimization problem to force the information policy to satisfy certain interpretability conditions. We will add more discussions on this point in the revision. For the interpretability of human models, we want to first highlight that our framework is designed to accommodate various representations of human behavior, including the standard closed-form representations of human models (e.g., Bayesian rationality or TH-model) and data-driven forms. While the former is interpretable, the latter is not in general. 
Developing an accurate and interpretable human model is actually an open and ongoing research question at the intersection of cognitive science and ML (e.g., Bourgin et al. ICML 2019 and Peterson et al. Science 2021). Our framework can seamlessly integrate with human models developed using such methodologies, mitigating the interpretability concerns. - Bourgin et al. "Cognitive model priors for predicting human decisions." ICML. 2019. - Peterson et al. "Using large-scale experiments and machine learning to discover theories of human decision-making." Science. 2021. **[Comparison to prior work]** As discussed in the expanded related work in the appendix, on a conceptual level, our work is related to the growing attention in understanding, modeling, and accounting for human behavior in computation systems, especially in the context of human-robot or human-AI interactions. We have cited several works from the same lab as the work mentioned by the reviewer on human-robot interactions, and we will include this one as well. More specifically, our work is similar in aiming to develop realistic human models and design AI, ML, or the environment to work with humans. However, depending on the problem setting, the format of interactions and objectives are different for different works in this literature. These differences result in different problem formulations and challenges, including the need to abstract proper forms of human behavior, identifying the objectives to be optimized with data-driven approaches, etc. **[Architecture]** We called HAIDNet an architecture to highlight that we have included human models in the design. However, we agree with the reviewer that it might be confused with the architecture of neural networks. We will call it a framework in our revision. **[Assumptions]** Thanks for the suggestions. We will more explicitly list the assumptions. 
In particular, we require two assumptions: (1) the behavior of humans used for training HAIDNet is drawn from the same distribution as the human behavior during deployment (i.e., the standard training/test assumption in ML), and (2) human models are differentiable (this assumption is soft, as it can be approximated using soft-max relaxations if it’s not). **[Importance of implementation details]** Our goal is to show that a data-driven approach can be used to find a near-optimal information policy for general human receiver models. Our investigation has adopted the most standard setup: fully-connected MLP + SGD, and this setup already shows promise in improving the outcomes. We did not examine other architectures and approaches but agree it would be interesting future work. **[Expanding discussion]** Thanks for the suggestions. We agree with the reviewer's comment and will discuss the scalability, differentiability, and interpretability in our revision. --- Rebuttal Comment 1.1: Title: Thanks for the response! Comment: Thanks to the authors for the response, especially with respect to my concerns on scalability (included in the response to reviewer 5W54) and interpretability. I've increased my score to a 6 in response.
Summary: This paper proposes a neural network based framework for automated information design that can optimise both the sender's and the receiver's behaviour. In situations where the receiver's behaviour cannot be approximated analytically, a neural network that learns the preferences from existing data can be utilised. In all situations the sender's behaviour is optimised via minimising a differentiable loss. In experiments they show that this framework can closely approximate the optimal solution in smaller scale problems, and can also be utilised with real human participants where the receiver's behaviour must be learnt, and outperforms the less data-driven baselines. Strengths: As somebody not familiar with the field of automated information design, I still enjoyed reading the paper. I liked the presentation of the motivations and the preliminaries on Bayesian persuasion. Relaxing the assumption of Bayesian rationality and demonstrating that a more data-driven approach (backed by ML) can be utilised is hugely useful, especially since it is demonstrated in a study with real human participants. That a learning based approach (utilising a small MLP) can be trained significantly quicker and still recover good approximations of optimal solutions is a big plus for the method (provided the practitioner has the appropriate hardware to train a neural network fast). Weaknesses: (Again not very familiar with the field) I am struggling a little bit to understand the exact contributions of this paper. Proposing to use a neural network and that machinery to solve Eqn 2 doesn't seem to be novel (as mentioned in the paper lines 168-169). There does seem to be novelty in attempting to learn human behaviour in these scenarios from data with a neural network. I am a little unsure as to whether there have been ML (specifically neural network based) approaches to this problem in this field. 
Nonetheless, since the ML setup+framing is such a crucial part of the proposed method, the paper would benefit from a more in depth description and discussion of the different components. 1. What exactly is the input to the neural network, what size is it (it looks to be a table that is presumably flattened?), and how does that scale wrt problem instances. 2. What architectures were considered/studied for this particular problem. 3. How can this approach scale when a tabular representation doesn't suffice. Related to the previous point, from my understanding of the paper a big contribution is that human behaviour in these tasks is much better approximated with a learning based approach. Thus, how exactly that module is set up is very important and should be discussed in more detail. In particular, how exactly the sender's information policy is represented (and how this might or might not scale). In addition, the human's behaviour is approximated using existing data. However, as soon as the sender's policy changes to something that has not already been observed then we are reliant on the generalisation of the neural network to accurately capture the human's behaviour. How well this part of the framework generalises to the optimised sender policies should also be looked at. I like Figure 3, testing the generalisation. However, I would like a more absolute measure of the performance as well. Does the best performing HAIDNet policy actually perform well on the task? Additionally, I would like more discussion on these results. Particularly for cases like $\beta_H = 100$, where it seems to 'generalise' much better than say $\beta_H=1$. What is the difference in the learned policies? Why for $\beta_H=\infty$ does the performance dip around 20? I like Table 4 a lot, some more discussion on it would be good to really drive home the differences between the Bayesian rational assumption and real observed human behaviour. 
Also a comparison between the NN and TH-Model to try and identify what about the TH-Model is preventing it from learning a better solution. Also, where does the neural network make errors? Its performance is indeed higher than the others, but there is still quite a bit of headroom. Also for Figure 4, a more qualitative discussion on the resulting policies learned by the methods. Minor: > "HAIDNet essentially recovers the optimal information policy in almost all scenarios" Would be nice to quantify this a bit more, since there do look to be small differences in Figure 2. You mention a boosting and aggregation approach is taken, please describe this in more detail. Is the information in C.1 required/does it need to be reported in the paper (Race/Ethnicity in particular)? Technical Quality: 3 good Clarity: 3 good Questions for Authors: (I have put my important questions in the section above - apologies) Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes, limitations and potential negative societal implications have been discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful review. **[Contributions]** We first highlight our contributions. Firstly, we introduced a data-driven optimization framework for information design. Note that even in standard settings where humans are Bayesian rational, designing the optimal information policy is known to be #P-hard. Our proposed approach can more efficiently identify near-optimal information policies compared with standard techniques. Secondly, our approach can accommodate different representations of human behavior beyond the standard assumption of Bayesian rationality. This flexibility also makes it straightforward to incorporate recent research trends that combine domain knowledge with data-driven models for human behavior. Lastly, we have validated our methods through human-subject experiments, showcasing the practical viability of our approaches. Below we respond to your other comments/questions. We will incorporate your suggestions and the following clarifications in the revision. **[Scalability]** Please see our responses to Reviewer 5W54. **[Network input and continuous input space]** The input to the neural network (NN) specifies the problem's specifications: the prior distribution, sender utility, and receiver utility. For instance, in a single-receiver setting with $M$ states and $N$ actions, the input size is given by $M * (2N+1)$. In scenarios with continuous action/state spaces (i.e., the input can't be tabulated), the information design problem is generally challenging to solve, both in the traditional information design literature and in our approach. To address this challenge, one common approach is to employ discretization. However, this approach requires some smoothness assumption (e.g., small deviations of actions wouldn’t lead to significantly different outcomes) to ensure the discretization error can be bounded. 
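A minimal sketch of the input encoding described here, with illustrative names and shapes (the exact flattening order used in the paper may differ):

```python
import numpy as np

def encode_instance(prior, sender_utility, receiver_utility):
    """Flatten a single-receiver problem instance into the network's input.

    prior:            shape (M,)    distribution over states
    sender_utility:   shape (M, N)  sender payoff per (state, action)
    receiver_utility: shape (M, N)  receiver payoff per (state, action)
    """
    return np.concatenate([prior.ravel(),
                           sender_utility.ravel(),
                           receiver_utility.ravel()])

M, N = 4, 3
x = encode_instance(np.full(M, 1 / M),
                    np.random.rand(M, N),
                    np.random.rand(M, N))
# M + 2 * M * N entries, i.e. the stated input size M * (2N + 1).
assert x.size == M * (2 * N + 1)
```

The quadratic growth of this vector in $M$ and $N$ makes concrete why a tabular encoding stops working for continuous or very large state/action spaces, which is where the discretization-with-smoothness argument above comes in.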
**[Network architecture]** As described in Section 3.2 and Appendix B.3, we utilized a 3-layer fully connected neural network with ReLU activation functions. We want to emphasize that the objective of our study is to demonstrate the viability of a data-driven approach for automated information design. We are not positing that our chosen architecture is the best possible. Nevertheless, even with this relatively standard choice, our results have already been promising. **[Receiver modeling without representative data]** We want to highlight that our framework is designed to accommodate various representations of human behavior. As shown in our evaluations, it can handle both standard closed-form representations of human models (e.g., Bayesian rationality or TH-model) and data-driven forms (e.g., neural networks). The reviewer's concern regarding the challenge of developing human models when data is not representative (e.g., encountering situations not in the historical data) pertains specifically to data-driven human models. Indeed, if our human model is entirely data-driven, we would require the training data to be representative to ensure generalizability, as is commonly required in supervised learning. That said, the reviewer's concern can be mitigated by incorporating recent studies that use existing closed-form human models as priors and leverage behavioral data to improve them [Bourgin et al. ICML 2019, Peterson et al. Science 2021]. The main benefit of this approach is that in cases where the data is insufficient, the model behavior would fall back to existing human models. Our framework can seamlessly integrate with human models developed using such methodologies. - Bourgin et al. "Cognitive model priors for predicting human decisions." ICML. 2019. - Peterson et al. "Using large-scale experiments and machine learning to discover theories of human decision-making." Science. 2021. 
**[Explain Figure 3]** The parameter $\beta_H$ in this human model represents the stochasticity level of human behavior. A smaller value of $\beta_H$ indicates more random human behavior. Consequently, human models with $\beta_H=1$ depict behavior that is close to random. As demonstrated in the left-most column of Figure 3, in such scenarios, all information policies lead to similar outcomes. Conversely, when $\beta_H$ approaches infinity, human behavior becomes deterministic. In these instances, an accurate understanding of human behavior is essential for effective performance, as illustrated in the rightmost column of Figure 3. To explain the relative performance dips observed around $\beta_H=20$ for policies trained on the human model with $\beta_H=\infty$: For smaller values of $\beta_H$, the performance across different policies tends to be similar. However, for considerably large values of $\beta_H$, a $\beta_H$ value of infinity serves as a reasonable approximation. Thus, the most significant relative performance gap for a policy trained on $\beta_H=\infty$ occurs with moderate values of $\beta_H$. **[Test accuracy of human models]** In our human-subject experiments, we are learning a model to predict the behavior of a population of humans. Given the intrinsic stochasticity of human behavior, it is not possible to reach perfect accuracy in predicting human behavior. In fact, Tang and Ho [56] have estimated that close to 20% of worker behavior in a similar setting is random. **[Optimization procedure]** The boosting procedure is only used in settings with Bayesian rational receivers. Since this human model is not differentiable, we adopt the common practice of using a differentiable softmax function to approximate its behavior, which leads to higher errors. We address this by taking a boosting approach of iteratively training networks, reweighting data distributions, and aggregating the learned neural networks to reduce the error. 
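One way to picture the differentiable softmax surrogate for a Bayesian-rational receiver mentioned above (a minimal sketch under our own assumptions; the paper's exact formulation may differ): replace the receiver's hard argmax best response with a softmax over expected utilities, where a temperature parameter controls how closely it approximates the argmax.

```python
import numpy as np

def softmax_response(posterior, receiver_u, temp=0.1):
    """Differentiable surrogate for a Bayesian-rational receiver.

    posterior  : (M,)   receiver's posterior over states
    receiver_u : (M, N) receiver utility for each (state, action)
    Returns a distribution over the N actions; as temp -> 0 it
    approaches the hard argmax best response.
    """
    eu = posterior @ receiver_u        # expected utility of each action
    z = (eu - eu.max()) / temp         # numerically stabilized logits
    p = np.exp(z)
    return p / p.sum()

posterior = np.array([0.7, 0.3])
receiver_u = np.array([[1.0, 0.0],
                       [0.0, 1.0]])
soft = softmax_response(posterior, receiver_u, temp=0.05)
print(soft.argmax())  # 0: same action a hard best response would pick
```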
**[Worker demographics]** It is a common practice to report the demographics of the user studies, for transparency and generalizability reasons. --- Rebuttal Comment 1.1: Title: Thanks for your reply Comment: Reading the other reviews and your replies, most of my concerns have been somewhat alleviated and I will raise my score. Please do incorporate some of your replies into the main paper, particularly that the focus is not on a particular neural network architecture, and that the particular architecture and input representation you used would struggle to scale to larger and more complex problem domains (and would this be an avenue for future work). --- Reply to Comment 1.1.1: Comment: Thanks! Yes, we will definitely incorporate the discussion in our revisions. We appreciate all the feedback that helps us improve the paper.
NeurIPS_2023_submissions_huggingface
2023
Depth-discriminative Metric Learning for Monocular 3D Object Detection
Accept (poster)
Summary: This work focuses on the monocular 3D object detection task. As many works have indicated, depth estimation is the bottleneck of this task, and the authors propose to apply metric learning to improve the accuracy of the depth estimation sub-task. The proposed metric-learning-based loss encourages the model to extract depth-discriminative features regardless of the visual attributes without increasing inference time and model size. Extensive experiments on the KITTI3D and Waymo Open datasets demonstrate the effectiveness of the proposed loss function. Strengths: 1. I like the idea of applying metric learning to improve the accuracy of depth estimation. Theoretically, this method has a good generalization ability and is easy to embed in other baseline models. 2. Extensive experiments and good performance. The authors conduct lots of experiments on the KITTI3D and the large-scale Waymo Open datasets. Besides, they test their metric learning-based loss function on several baseline models, and the improvements across several settings demonstrate the effectiveness of their proposed model. 3. The main idea is easy to follow and this paper is well-organized. This paper has many mathematical details that scholars in this field may not be familiar with, but it still presents them clearly and systematically. Weaknesses: 1. In lines 68-69, the authors claim their proposed work is 'the first approach that applies metric learning to monocular 3D object detection.' In fact, there is another work [1] that discussed how to apply metric learning in monocular 3D object detection with a focus on dimension estimation. This claim should be modified or removed. 2. Although this work provides lots of experiments to show the effectiveness of their method, lots of them are based on the same foundation model, i.e. CenterNet (and one of them is based on the transformer-based pipeline). However, there are still some other popular detection pipelines such as the BEV paradigm. 
It would be better to validate the effectiveness on more pipelines. 3. This work conducts experiments on the large-scale Waymo dataset, which is great. However, I still want to see the performance on the nuScenes dataset, because the nuScenes benchmark is dominated by another detection pipeline and adopts different evaluation metrics. Evaluating on this dataset can further show the effectiveness and generalization ability of the proposed method. Overall, this paper proposes a novel and effective metric-learning-based loss function. I tend to accept this work and I can further improve my rating if the authors can show the generalization ability of this work in more settings. [1] Dimension Embeddings for Monocular 3D Object Detection, Zhang et al., CVPR'22 Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: See weaknesses. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: See weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: >**Q1.** In lines 68-69, the authors claim their proposed work is 'the first approach that applies metric learning to monocular 3D object detection.' In fact, there is another work that discussed how to apply metric learning in monocular 3D object detection with a focus on dimension estimation. This claim should be modified or removed. **A1.** We were not previously aware of the cited work. We appreciate your bringing it to our attention and will make the necessary revisions accordingly. ___ >**Q2.** Although this work provides lots of experiments to show the effectiveness of their method, lots of them are based on the same foundation model, *i.e.* CenterNet (and one of them is based on the transformer-based pipeline). However, there are still some other popular detection pipelines such as the BEV paradigm. It will be better to validate the effectiveness of more pipelines. **A2.** We greatly appreciate your insightful feedback. Following your request, we conducted an additional experiment on ImVoxelNet [9] that adopts the BEV paradigm using the 3D voxel feature. This baseline reformulates outdoor 3D object detection as 2D detection in the BEV plane. We thus extracted the object feature from the projected BEV feature prior to the head that consists of two parallel 2D convolution layers. The results in `R-Table 3` show that while the performance gain isn't as pronounced as with CenterNet-based methods, the proposed $\cal{L_{qi}}$ consistently improves performance across all metrics, resulting in an overall performance boost of $+6.6$%. These findings, as highlighted in the main paper, underscore the broad utility of $\cal{L_{qi}}$ in 3D object detection, provided object depth labels are available. ___ >**Q3.** This work conducts experiments on the large-scale Waymo dataset, which is great. 
However, I still want to see the performance on the nuScenes dataset, because the nuScenes benchmark is dominated by another detection pipeline and adopts different evaluation metrics. Evaluating on this dataset can further show the effectiveness and generalization ability of the proposed method. **A3.** In response to your suggestion, we have now expanded our evaluation to include the nuScenes dataset [4], as presented in `R-Table 4`. The proposed method achieved a $+2.8$% improvement in MAE, and a notable $+12.3$% enhancement in Car AP, resulting in an overall performance boost of $+7.6$%. These consistent gains across all datasets underline the robustness and generalization ability of our method. ___ --- Rebuttal Comment 1.1: Title: Final Rating Comment: I appreciate the additional experiments which further confirm the effectiveness and generalization ability of the proposed method. Based on the solid theoretical analysis and good results, I'd like to change my score to 'strong accept'.
Summary: This paper introduces a novel metric learning scheme for extracting more depth-discriminative features in the monocular 3D object detection task. A distance-preserving function is adopted to build the relation between feature space and the ground-truth object depth. The authors propose a quasi-isometric loss to adjust the distances among object descriptors. Furthermore, an auxiliary head for object-wise depth estimation is used during training, enhancing depth quality while maintaining the inference time. The experiments show that the proposed method can significantly improve the performance of various baselines on KITTI and Waymo datasets. Strengths: - This paper introduces a metric learning method to improve the discrimination of object descriptors according to their depth information. The scheme is simple yet effective, which maintains the geodesic distance of depth information in feature space. The quasi-isometry under the local distance-preserving condition mitigates the negative damage to the non-linearity of the natural manifold. It minimizes the influence on other tasks. - The quasi-isometric loss is designed to arrange condition-violated samples. The samples far from the quasi-isometric distance are pulled closer. The samples too close in feature space are pushed away. The loss focuses on hard samples which avoids many unnecessary computations while ensuring the isometric objective. - The proposed method is plug-and-play. It can easily integrate into various monocular 3D object detection baselines and brings significant improvement. Weaknesses: - I am curious about the efficiency of the proposed method during training. The calculations of relative distances between objects in both original and feature space would be enormous. Besides, the complexity of calculating the distance matrix grows with the square of the number of objects. Therefore, the proposed method may not be suitable for scenarios with a large number of objects during training. 
The paper does not discuss this problem. - The object-wise depth prediction head is trained for estimating more accurate depth, avoiding significant errors that existed in the center depth prediction head. Why not directly use the prediction of object-wise depth head for a more accurate depth estimation during inference? Technical Quality: 3 good Clarity: 3 good Questions for Authors: See weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors do not address the limitations of their work. This method may be limited by the number of objects. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: >**Q1.** I am curious about the efficiency of the proposed method during training. The calculations of relative distances between objects in both original and feature space would be enormous. Besides, the complexity of calculating the distance matrix grows with the square of the number of objects. Therefore, the proposed method may not be suitable for scenarios with a large number of objects during training. The paper does not discuss this problem. **A1.** As you pointed out, the time/memory complexity of calculating the distance matrix increases quadratically with the number of objects. Nevertheless, we enhanced the efficiency of our quasi-isometric method at the implementation level by loop unrolling, as detailed in the algorithm table of the supplementary materials. Instead of computing each distance pair by pair, we replaced these operations with GPU-friendly matrix operations. Following your request, we empirically measured the training cost of our quasi-isometric loss. We compared the training time of one epoch between 'MonoCon' [6] and 'MonoCon+Ours' using a single NVIDIA Titan RTX. We simulated the most challenging scenario by setting the maximum number of property-violated object pairs to 30 per image, given that MonoCon can infer up to 30 objects. Method|Avg. training time (sample/ms)|Single epoch time (s) ---|---|--- MonoCon|$84.9$|$315$ MonoCon+Ours (worst case)|$87.1~(+2.59$%$)$|$324~(+2.86$%$)$ The results were as follows: 'MonoCon' required 315 seconds for one epoch, whereas 'MonoCon+Ours' took 324 seconds. This represents a mere $2.86$% increase in training time, which we believe has a negligible impact on overall computational efficiency. ___ >**Q2.** The object-wise depth prediction head is trained for estimating more accurate depth, avoiding significant errors that existed in the center depth prediction head. 
Why not directly use the prediction of object-wise depth head for a more accurate depth estimation during inference? **A2**. When utilizing object-wise depth maps for depth prediction, issues such as occlusion arise. For instance, if the estimated center point is obscured by the 2D bounding box of a foreground car, the system might mistakenly extract the depth information of the foreground vehicle. This can lead to a decline in performance. Experimental results confirm this: 'MonoCon+Ours' achieved a car class mAP (mean average precision) of $19.43$ (moderate), whereas 'MonoCon+Ours using the object-wise depth map for depth prediction' only managed a car class mAP of $15.17$ (moderate). ___ --- Rebuttal Comment 1.1: Title: The reply addresses my concerns Comment: Thanks for your reply. The feedback has addressed my concerns. After reading other reviews and the rebuttal materials, I lean toward accepting this paper.
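The replacement of pair-by-pair loops with broadcasted matrix operations described in A1 above can be pictured with a small NumPy sketch (our own illustration under assumed shapes, not the authors' implementation): all pairwise distances are computed in one vectorized step rather than an O(n^2) Python loop.

```python
import numpy as np

def pairwise_dist(x):
    """All pairwise Euclidean distances in one broadcasted operation.

    x : (n, d) array of n object features (or n depths with d = 1).
    Returns an (n, n) distance matrix; the double loop over object
    pairs becomes a single vectorized, GPU-friendly computation.
    """
    diff = x[:, None, :] - x[None, :, :]   # (n, n, d) via broadcasting
    return np.sqrt((diff ** 2).sum(-1))

depths = np.array([[5.0], [10.0], [30.0]])
D = pairwise_dist(depths)
print(D[0, 1], D[0, 2])  # 5.0 25.0
```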
Summary: One main challenge of monocular 3D object detection models is the lack of depth information from RGB images. The authors proposed a metric learning scheme to encourage the model to extract depth-discriminative features. Based on the presented theoretical results, the authors proposed a quasi-isometric loss and an object-wise depth map loss to supervise the model. Quantitative results on benchmark datasets supported the main arguments of the paper and sufficient ablation study experiments were conducted. Strengths: 1. The motivation and problem setting are explained clearly. References were provided to demonstrate why depth is key to improving current monocular 3D methods and how depth-discriminative features would help. 2. The proposed quasi-isometric loss seems novel and effective. To prevent the depth-discriminative loss from damaging the non-linearity of the natural manifold, the authors adopted a distance-preserving condition. Despite the extra hyper-parameters introduced, the proposed method is effective in general under various settings. 3. Quantitative results on two benchmark datasets using multiple baseline models demonstrated the effectiveness of the method. Results showed that the proposed depth-discriminative loss is an effective approach to assist monocular 3D methods. Weaknesses: 1. The high-level idea of the proposed approach resembles previous contrastive learning approaches [1]. I could imagine adding depth-based feature contrastive losses to the baseline loss. Would that work? What would be the advantage of the proposed approach compared to contrastive losses? 2. The authors claimed that directly learning quasi-isometry would hurt sub-tasks. Despite such trade-offs being common in deep learning, I think this problem is a bit understudied. For instance, object features from objects that differ a lot in distance should be easy to discriminate. Would larger $K$ help with a larger $\epsilon$? Or is this design related to hard negative sampling? 
3. Figure 2 seems interesting. However, I assume the pairs are also defined by the given $K, B, \epsilon$, which makes it less convincing. References: 1. D. Neven et al. Towards End-to-End Lane Detection: an Instance Segmentation Approach. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. What is “Ours” in supplementary Table 3? Only $\mathcal{L}\_{qi}$ or both $\mathcal{L}\_{qi}$ and $\mathcal{L}\_{obj}$? I assume when ablating $K, B, \epsilon$, you should not involve $\mathcal{L}\_{obj}$ in the comparison? 2. Since there is an auxiliary head for depth estimation, is the estimated depth more accurate with the quasi-isometric loss? It would be good to compare the trade-off between depth estimation and “other sub-tasks” when $\epsilon$ changes. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: 1. The work can be improved by comparing with methods with similar ideas (see weakness 1). 2. It is a bit unclear how different feature spaces look under different hyper-parameter settings. It would be good if more analyses are presented (see weakness 2). Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: >**Q1.** The high-level idea of the proposed approach resembles previous contrastive learning approaches. I could imagine adding depth-based feature contrastive losses to the baseline loss. Would that work? What would be the advantage of the proposed approach compared to contrastive losses? **A1.** In our main paper, we excluded results from conventional metric learning methods like SimCLR [1] and SupCon [2] since they are primarily tailored for classification tasks. Instead, we chose to compare ours with SupCR [3], which is designed specifically for regression tasks. This is because the methods [1,2] are not directly applicable to object depth estimation as a regression task. To highlight the advantages of our approach over conventional contrastive losses, we modified two renowned contrastive learning techniques: SimCLR [1] and SupCon [2]. SupCon is specifically tailored to enhance feature learning by harnessing the information derived from the GT depth (refer to `R-Table 1` for details). Due to space constraints, we kindly ask you to refer to **A2-2** of **R1 (Wa1k)** for a comprehensive analysis of the experiment. ___ >**Q2.** The authors claimed that directly learning quasi-isometry would hurt sub-tasks. Despite such trade-offs being common in deep learning, I think this problem is a bit understudied. For instance, object features from objects that differ a lot in distance should be easily discriminated against. Would larger $K$ help with a larger $\epsilon$? Or is this design related to hard negative sampling? **A2.** Our quasi-isometric loss addresses the negative transfer problem in multi-task learning due to two primary reasons. First, the quasi-isometric loss inherently has a capable margin, denoted as $(K, B)$, which ensures the distinctiveness of other sub-task classifiers. To illustrate, consider two objects: one from the car class and the other from the pedestrian class. 
Even though both objects are at the same depth, our loss ensures their distinguishability. In contrast, other contrastive losses relying on GT depth tend to consistently aggregate feature points based solely on depth, neglecting the discriminability required for other sub-tasks. This is further described in `R-Table 1`. Second, our quasi-isometric loss benefits from the incorporation of a local distance-preserving condition. This ensures a structured arrangement of the feature manifold while maintaining its intricate overall shape. For instance, let's imagine the feature space being modeled by a subset of the circle manifold, as depicted in `R-Figure 2-(a)`. From a depth perspective, this manifold represents a structured feature space since the distance of all object feature pairs along the geodesic corresponds closely with depth distance. However, without the local distance-preserving condition, as shown in `R-Figure 2-(b)`, the quasi-isometric loss might erroneously infer that features $p4$ and $p5$ violate property norms. On the other hand, integrating the local distance-preserving condition denoted by $\epsilon$ (`R-Figure 2-(c)`) refines the quasi-isometric loss to only consider neighbor samples. This approach enables a more nuanced arrangement of the feature manifold while preserving the overall shape and non-linearity of the original feature space. In summary, the pre-defined hyperparameters $(K,B,\epsilon)$ play a crucial role in determining the strictness of the local quasi-isometric property-violated condition. As you rightly pointed out, increasing the value of $K$ can potentially relax the strict condition set by a larger $\epsilon$. However, it is essential to underscore that an excessively large $K$ cannot effectively promote depth discriminative features in objects. Moreover, an excessively large $\epsilon$ also harms the non-linearity of the feature manifold. 
For empirical clarity, we conducted an experiment with MonoCon [6] + $\cal{L}_{qi}$, setting $K=5.0$ and $\epsilon=\infty$, which we deliberately set to excessively large values. This resulted in a degraded performance, registering a car class mAP of $17.51$ (moderate), in comparison to the $17.84$ (moderate) attained by the standard MonoCon. ___ >**Q3.** Figure 2 seems interesting. However, I assume the pairs are also defined by the given $K, B,$ and $\epsilon$, which makes it less convincing. **A3.** We agree that the pre-defined hyper-parameters $(K, B, \epsilon)$ diminish the elegance of the proposed quasi-isometric loss. In the near future, we plan to consider solutions like automated parameter searches to address the issue. ___ >**Q4.** What is “Ours” in supplementary Table 3? Only $\cal{L_{qi}}$ or both $\cal{L_{qi}}$ and $\cal{L_{obj}}$? I assume when ablating $K, B,$ and $\epsilon$, you should not involve $\cal{L_{obj}}$ in the comparison? **A4.** We appreciate your attention to detail. In `S-Table 3`, "Ours" includes both $\cal{L_{qi}}$ and $\cal{L_{obj}}$. We revisited the experiments for `S-Table 3` and updated `R-Table 2` accordingly. Due to space constraints, we provided only partial hyperparameter setups but will include more details on additional hyperparameter setups in the revised version. Notably, the trends observed in the experimental results remain consistent. We identified a degradation in performance with excessively large values of $K$ or $\epsilon$. ___ >**Q5.** Since there is an auxiliary head for depth estimation, is the estimated depth more accurate with the quasi-isometric loss? It would be good to compare the trade-off between depth estimation and “other sub-tasks” when $\epsilon$ changes. **A5.** In `M-Table 4`, the ablation study for $\cal{L_{obj}}$ and $\cal{L_{qi}}$ shows that the marginal performance gains closely match the cumulative ones, highlighting improved depth estimation. 
In response to your feedback, we have conducted additional experiments adjusting epsilon and detailed the findings in `R-Table 5` to examine its impact on depth estimation relative to other tasks. ___ --- Rebuttal Comment 1.1: Title: Final Rating Comment: Thank the authors for preparing the additional experimental results. They addressed my concerns and I believe the results and discussions would be a good addition to the paper. I think this is a strong submission and recommend "7: Accept".
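To make the $(K, B, \epsilon)$ mechanics discussed in this thread concrete, here is a simplified sketch based on the rebuttal's description (our own illustration; the paper's exact loss may differ): for each object pair whose depth gap is within $\epsilon$, the feature distance is penalized if it falls outside the quasi-isometric band $[d/K - B,\; K \cdot d + B]$, while pairs beyond $\epsilon$ are ignored, preserving the manifold's global shape.

```python
import numpy as np

def quasi_isometric_penalty(feat_dist, depth_dist, K=1.5, B=0.5, eps=10.0):
    """Penalty for pairs violating the (K, B)-quasi-isometric band.

    feat_dist, depth_dist : (n, n) pairwise distance matrices.
    Only pairs with depth_dist <= eps (local condition) contribute:
    pairs above K*d + B are pulled closer, pairs below d/K - B pushed apart.
    """
    upper = K * depth_dist + B
    lower = depth_dist / K - B
    local = depth_dist <= eps              # local distance-preserving mask
    pull = np.maximum(feat_dist - upper, 0.0)
    push = np.maximum(lower - feat_dist, 0.0)
    return ((pull + push) * local).sum()

depth = np.array([[0.0, 4.0], [4.0, 0.0]])
# Features 20 apart while depths are only 4 apart: above K*4 + B = 6.5,
# so both symmetric entries are penalized by 20 - 6.5 = 13.5 each.
feat = np.array([[0.0, 20.0], [20.0, 0.0]])
print(quasi_isometric_penalty(feat, depth))  # 27.0
```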
Summary: The paper proposed a new approach to monocular 3D object detection. The critical contribution of the work is the application of metric learning to improve depth estimation and an additional head with auxiliary depth prediction. The resulting approach improves 3D object detection accuracy without increasing inference time, model size, or additional data. The proposed method utilizes a metric learning scheme that preserves the geodesic distance between depth information and the feature space. This scheme encourages the model to extract depth-discriminative features without negatively impacting other non-depth tasks (object size, type, etc.). An auxiliary head is also introduced to enhance depth estimation, adapted from [20]. This auxiliary head improves the quality of depth estimation without impacting inference time, ensuring efficient performance. The experimental evaluations conducted on the KITTI and Waymo datasets demonstrate the effectiveness of the proposed method. The results consistently show solid improvements in performance across various monocular 3D object detection methods. ------ Updated my score after rebuttal. Strengths: - The paper is well-written and easy to follow. The authors did a great job describing the quasi-isometric concept, its mathematics, and how the proposed loss enforces them. - The paper is well-intuitive. The different tasks in 3D object detection can create conflicts in feature extraction, negatively impacting each task. The idea of using metric learning to provide a feature manifold that balances the discrimination of depth and local structure for other tasks is promising. - The related work section is comprehensive, covering different related areas. - The experiment demonstrates the strength of the proposed method and generalizability on different baselines and datasets. Ablation analysis is done on the two key components to show their effectiveness and contribution. 
Weaknesses: - The paper provides two essential components contributing to the final improvement. But I don't think the second auxiliary head contains a lot of novelty compared to related works such as MonoCon. This limits the novelty of the paper to some extent. - It's great that the paper compared to SupCR in ablation analysis. However, there is a missing discussion from a more theoretical standpoint on why the proposed loss is better than contrastive loss. In the end, both of the losses seem to be able to maintain a certain local distance for the other tasks. - Part of the parameter-sensitive analysis should be moved to the main paper. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Can I ask why not use nuScenes, which seems to be more popular in monocular 3D detection in the past few years? - How does P correspond to the feature map h in section 3.2? Is P the feature space that the map predicts? Or is a different one going through some MLP? - Is there any systematic way for parameter choosing? Even just a starting set? - Have the authors tried other metric learning methods that didn't seem to work? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The parameter of the loss seems to be quite sensitive to the final performance. Would it be helpful to provide guidelines or an algorithm for parameter initialization? Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: >**Q1.** I don't think the second auxiliary head contains a lot of novelty compared to related works such as MonoCon. This limits the novelty of the paper to some extent. **A1.** We acknowledge your concern regarding the novelty of the auxiliary head. While it may not present a significant technical novelty, it uses an object-wise depth map as a subordinate task to mitigate errors in a depth task stemming from center shifting. This is because CenterNet-based models rely on the exact center point pixel, assuming objects as points. ___ >**Q2.** There is a missing discussion from a more theoretical standpoint on why the proposed loss is better than contrastive loss. In the end, both of the losses seem to be able to maintain a certain local distance for the other tasks. **A2.** Our quasi-isometric loss addresses the negative transfer problem in multi-task learning due to two primary reasons. First, the quasi-isometric loss inherently has a capable margin, denoted as $(K, B)$, which ensures the distinctiveness of other sub-task classifiers. To illustrate, consider two objects: one from the car class and the other from the pedestrian class. Even though both objects are at the same depth, our loss ensures their distinguishability. In contrast, other contrastive losses relying on GT depth tend to consistently aggregate feature points based solely on depth, neglecting the discriminability required for other sub-tasks. This is further described in `R-Table 1`. Second, our quasi-isometric loss benefits from the incorporation of a local distance-preserving condition. This ensures a structured arrangement of the feature manifold while maintaining its intricate overall shape. For instance, let's imagine the feature space being modeled by a subset of the circle manifold, as depicted in `R-Figure 2-(a)`. 
From a depth perspective, this manifold represents a structured feature space since the distance of all object feature pairs along the geodesic corresponds closely with depth distance. However, without the local distance-preserving condition, as shown in `R-Figure 2-(b)`, the quasi-isometric loss might erroneously infer that features $p4$ and $p5$ violate the quasi-isometric property. On the other hand, integrating the local distance-preserving condition denoted by $\epsilon$ (`R-Figure 2-(c)`) refines the quasi-isometric loss to only consider neighbor samples. This approach enables a more nuanced arrangement of the feature manifold while preserving the overall shape and non-linearity of the original feature space. ___ >**Q3.** Parameter-sensitive analysis should be moved to the main paper. **A3.** Thank you for your constructive feedback. We will report the parameter-sensitive analysis in the revised version. ___ >**Q4.** Can I ask why not use nuScenes, which seems to be more popular in monocular 3D detection in the past few years? **A4.** Many monocular 3D object detection studies have favored evaluation on the KITTI [7] and Waymo [8] datasets, with fewer focusing on nuScenes [4]. Heeding your suggestion, we have now expanded our evaluation to include the nuScenes dataset, as presented in `R-Table 4`. The proposed method achieved a $2.8$% improvement in MAE, and a notable $12.3$% enhancement in Car AP, resulting in an overall performance boost of $7.6$%. These consistent gains across all datasets underline the robustness and generalization ability of our method. ___ >**Q5.** How does P correspond to the feature map h? Is P the feature space that the map predicts? Or is it a different one obtained through some MLP? **A5.** We directly extract the object feature $\rho$ from the feature map $\mathbf{h}$ by referencing the spatial coordinate of the GT coarse projected 3D center $(u,v)$. ___ >**Q6.** Is there any systematic way for parameter choosing? Even just a starting set?
**A6.** While the systematic approach to parameter selection was not a focal point of the main paper, we did identify two critical observations that assist in this process: First, for datasets with a large number of objects, selecting a stricter $B$ can enhance performance. This is achieved by approximately preserving the pseudo-geodesic distance in the $\mathbf{P}$-space. (Refer to Section B in the supplementary materials.) Second, as mentioned in the main paper, both excessively small and excessively large values of $\epsilon$ can hinder representation learning. Specifically, a small $\epsilon$ risks sampling an insufficient number of property-violated object pairs. On the other hand, a large $\epsilon$ can compromise the non-linearity of the feature manifold within $\mathbf{P}$-space. We suggest beginning with the parameter set $(K=1.5, B=0.5, \epsilon=10.0)$, as our experiments have shown that this configuration consistently boosts performance across multiple baselines and datasets. We acknowledge that the pre-defined parameters may detract from the robustness of our method. However, future research could address this issue, possibly through methods like automated parameter search, among other potential solutions. ___ >**Q7.** Have the authors tried other metric learning methods that didn't seem to work? **A7.** In our main paper, we excluded results from conventional metric learning methods like SimCLR [1] and SupCon [2] since they are primarily tailored for classification tasks. Instead, we chose to compare ours with SupCR [3], which is designed specifically for regression tasks. This is because the methods [1,2] are not directly applicable to object depth estimation as a regression task. To highlight the advantages of our approach over conventional contrastive losses, we modified two renowned contrastive learning techniques: SimCLR [1] and SupCon [2].
SupCon is specifically tailored to enhance feature learning by harnessing the information derived from the GT depth (refer to the global response for details). Due to space constraints, we kindly ask you to refer to **A2-2** of **R1 (Wa1k)** for a comprehensive analysis of the experiment. ___ --- Rebuttal Comment 1.1: Comment: I appreciate the authors' effort in the rebuttal; it addresses most of my concerns. I will increase my rating based on the explanation and the new results.
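The $(K, B)$ margin with the $\epsilon$-ball neighborhood discussed in **A2**, and the starting parameters from **A6**, can be illustrated with a minimal sketch. This is not the authors' implementation: the inequality follows the standard definition of a quasi-isometric embedding, and the Euclidean (Minkowski $p=2$) distances and the helper name are assumptions for illustration.

```python
import math

def quasi_isometric_ok(z_i, z_j, p_i, p_j, K=1.5, B=0.5, eps=10.0):
    """Return True/False if the pair satisfies the (K, B)-quasi-isometric
    property, or None when the pair falls outside the eps-ball and is
    ignored (the local distance-preserving condition)."""
    dz = abs(z_i - z_j)        # depth distance d1
    dp = math.dist(p_i, p_j)   # feature distance d2 (Euclidean, an assumption)
    if dp >= eps:              # ignore zone: only neighbor samples count
        return None
    # quasi-isometric embedding: dz/K - B <= dp <= K*dz + B
    return dz / K - B <= dp <= K * dz + B
```

With the suggested starting set, a pair whose feature distance tracks its depth distance satisfies the property, a pair whose depths differ far more than their features violates it, and a far-apart feature pair is skipped entirely.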
Rebuttal 1: Rebuttal: We thank all reviewers for their constructive comments. For your convenience, please download and assess the **attached PDF**. To simplify cross-referencing, figures and tables in the main paper, supplementary materials, and rebuttal are denoted as `M-[Table X/Figure X]`, `S-[Table X/Figure X]`, and `R-[Table X/Figure X]`, respectively. Below, we detail additional experiments and visualizations conducted per reviewers' requests: ___ >In response to **R1 (Wa1k)**, we included visualization results of the MonoCon [6] feature space learned by both our proposed quasi-isometric loss and other metric learning methods using PCA in `R-Figure 1`. The points and their corresponding colors represent the projected object feature points and ground-truth (GT) depths, respectively. ___ >Addressing the inquiries from **R2 (6BaQ)** and **R3 (nNTt)**, we introduced `R-Figure 2` to explain the theoretical standpoint of the proposed quasi-isometric loss. In this figure, $z/p$ and $\Delta \text{z}/\Delta \text{p}$ represent the depth/object-feature and the Minkowski distance of the depth/object-feature pair, respectively. Specifically, $\Delta \text{z} = d_1(z_1, z_i), \Delta \text{p} = d_2(p_1, p_i)$, with $i = \{2,3,4,5\}$ and $d_1(\cdot, \cdot), d_2(\cdot, \cdot)$ as the Minkowski distance metrics. In `R-Figure 2-(b)` and `R-Figure 2-(c)`, the white areas represent property-satisfied zones where the properties are met, while the gray areas indicate ignore zones where representation learning between object pairs is hindered due to the constraints of the $\epsilon$-ball. ___ >Based on the feedback from **R1 (Wa1k)**, **R2 (6BaQ)** and **R3 (nNTt)**, we added `R-Table 1` to compare the proposed quasi-isometric loss with previous contrastive losses.
Since existing contrastive losses, excluding SupCR [3], are designed for classification, we adjusted each contrastive loss for object depth estimation, which is a regression problem and a sub-task of monocular 3D object detection: >>$\cal{L_{\text{SimCLR}}}$ [1]: We select all possible object feature pairs as negative pairs, excluding identical entity object features. > >>$\cal{L_{\text{SupCon}}}$ [2]: We segmented the continuous depth interval into 5-meter bins, treating them as classes, and then trained the backbone and classifier with the original SupCon loss. > >>$\cal{L_{\text{SupCon v2}}}$: We initially trained the backbone feature space with only $\cal{L_\text{SupCon}}$, and subsequently trained the task classifiers with the $\cal{L_\text{baseline}}$ loss while freezing the backbone. > >>$\cal{L_{\text{SupCR}}}$: We use the same experimental setup as in the main paper. > >For a fair comparison, we adopted NT-Xent as the loss form and cosine similarity as the metric. All existing contrastive learning methods make the positive pairs via two-view augmentation techniques, specifically horizontal flipping and color jittering. ___ >In response to **R3 (nNTt)**, we revised and presented the performance comparison based on the hyperparameters $K, B,$ and $\epsilon$ in `R-Table 2`. ___ >Responding to **R5 (eeNQ)**, we applied our quasi-isometric loss to a BEV-based 3D object detection method, ImVoxelNet [9]. The evaluation results on the KITTI [7] **$\textit{validation}$** set can be found in `R-Table 3`. While the implementation details for ImVoxelNet remain consistent with those mentioned in the paper, we made a modification to the batch size due to GPU memory constraints. Specifically, the batch size of ImVoxelNet was adjusted to 4, half of its original batch size. ___ >Per the requests from **R2 (6BaQ)** and **R5 (eeNQ)**, we performed additional experiments on the nuScenes dataset [4] with the proposed method. The results are in `R-Table 4`.
We used a split [5] comprising 28,130 train and 6,019 validation images from the front camera. The split is convertible to the KITTI format using the script [10], facilitating comparison between MonoCon and MonoCon+Ours on the nuScenes dataset. Metrics applied include the Mean Absolute Error (MAE) of the depth of the bounding boxes and Average Precision [4] (AP), defined by matching through thresholding the 2D center distance (d) on the ground plane, rather than using Intersection over Union (IoU). ___ >Following **R3 (nNTt)**'s suggestion, we added the performance metrics of object depth estimation with varying epsilon values and the trade-off for other sub-tasks in `R-Table 5`. ___ $\mathbf{References}$ [1] *Chen, Ting, et al. "A simple framework for contrastive learning of visual representations." ICML 2020.* [2] *Khosla, Prannay, et al. "Supervised contrastive learning." NeurIPS 2020.* [3] *Zha, Kaiwen, et al. "Supervised Contrastive Regression." arXiv (2022).* [4] *Caesar, Holger, et al. "nuScenes: A multimodal dataset for autonomous driving." CVPR 2020.* [5] *Shi, Xuepeng, et al. "Geometry-based distance decomposition for monocular 3D object detection." ICCV 2021.* [6] *Liu, Xianpeng, Nan Xue, and Tianfu Wu. "Learning auxiliary monocular contexts helps monocular 3D object detection." AAAI 2022.* [7] *Geiger, Andreas, Philip Lenz, and Raquel Urtasun. "Are we ready for autonomous driving? The KITTI vision benchmark suite." CVPR 2012.* [8] *Sun, Pei, et al. "Scalability in perception for autonomous driving: Waymo open dataset." CVPR 2020.* [9] *Rukhovich, Danila, Anna Vorontsova, and Anton Konushin. "ImVoxelNet: Image to voxels projection for monocular and multi-view general-purpose 3D object detection." WACV 2022.* [10] https://github.com/nutonomy/nuscenes-devkit/blob/master/python-sdk/nuscenes/scripts/export_kitti.py Pdf: /pdf/d97babcfa0da7af73650ad8566a3e010db8ee02c.pdf
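The contrastive-loss adaptations described in this global rebuttal (NT-Xent loss form with cosine similarity, and 5-meter depth bins treated as pseudo-classes for the $\cal{L_{\text{SupCon}}}$ variant) can be sketched as follows. The function names and the single-anchor formulation are illustrative assumptions, not the authors' code.

```python
import numpy as np

def depth_to_class(depth_m, bin_width=5.0):
    # Continuous GT depth -> pseudo-class index (5-meter bins) for L_SupCon
    return int(depth_m // bin_width)

def nt_xent(features, pos_idx, anchor=0, tau=0.1):
    """NT-Xent loss for a single anchor. features: (N, D) array of object
    features; in practice the positive pair comes from two-view augmentation
    (horizontal flipping, color jittering)."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sims = (f @ f[anchor]) / tau                # cosine similarity / temperature
    mask = np.arange(len(f)) != anchor          # exclude the anchor itself
    # -log softmax of the positive over all non-anchor candidates
    return float(-sims[pos_idx] + np.log(np.exp(sims[mask]).sum()))
```

Since the positive's similarity appears in the denominator's sum, the loss is always non-negative, as with the standard NT-Xent formulation.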
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper proposes a metric learning scheme to learn depth-discriminative features for object depth prediction, which helps improve the overall task of monocular 3D object detection without negatively impacting the performance of the other sub-tasks (e.g., object class, bounding box size) therein. Specifically, they employ a distance-preserving function and the proposed ($K, B, ε$)-quasi-isometric loss to arrange the feature space manifold in accordance with ground-truth object depth, while preserving the non-linearity of the natural feature manifold. They also introduce an auxiliary head (in training) for object-wise depth estimation to enhance the depth quality. Experiments on the KITTI and Waymo datasets show that the proposed method can be incorporated into several 3D object detection backbones for improvement. Strengths: - The proposed $(K, B, ε)$-quasi-isometric loss is mathematically explained to help learn depth-discriminative features. - Experiments on KITTI and Waymo show the effectiveness of the proposed method, and the ablation studies are well-designed for verification, including several backbones and a comparison with other SOTA (e.g., SupCR). - The paper is well written, including the problem definition, the purpose, and the background. The equations and the detailed notations help to understand the method. Weaknesses: See **Questions** below. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Are those hyper-parameters $K$, $B$ and $\epsilon$ easy to find in practice? Will different backbone architectures require different setups of $K$, $B$, and $\epsilon$? - According to Eq 6, the proposed quasi-isometric loss is implemented as a contrastive loss. 1) Are there any restrictions on the absolute number and the relative ratio of positive/negative samples/anchors?
2) Is there any direct comparison between the proposed Quasi-isometric loss with the naive contrastive loss for metric learning in the task of monocular 3D object detection? - It would be helpful to show some visualization results of the learned features. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: - N/A for the limitations. - I suggest the authors provide the failure cases (and the corresponding explanations) of the proposed losses on monocular 3D object detection. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: >**Q1.** Are those hyper-parameters $K, B,$ and $\epsilon$ easy to find in practice? Will different backbone architectures require different setups of $K, B,$ and $\epsilon$? **A1.** Finding the *"Optimal"* hyper-parameters can be challenging across diverse backbones and datasets. However, our experiments spanning multiple datasets and baselines consistently used the hyperparameters: $(K=1.5, B=0.5, \epsilon=10.0)$. These settings consistently enhanced the 3D object detection performance across various setups. ___ >**Q2.** According to Eq 6, the proposed Quasi-isometric loss is implemented as the contrastive loss. - Are there any restrictions on the absolute number and the relative ratio of positive/negative samples/anchors? - Is there any direct comparison between the proposed Quasi-isometric loss with the naive contrastive loss for metric learning in the task of monocular 3D object detection? **A2-1.** Our quasi-isometric loss does not have rigid constraints. Most 3D detection networks, like MonoCon [6], cap at 30 objects per image. If there are no positive/negative samples, the loss returns zero, courtesy of the NT-Xent loss design. **A2-2.** Since object depth estimation is a regression-oriented task, most conventional contrastive learning methods aren't directly applicable, with the exception of SupCR [3]. Upon reviewers' requests, we adapted the two prominent contrastive learning schemes, SimCLR [1] and SupCon [2], which were originally designed for classification tasks, to enable feature learning. SupCon is specifically tailored to enhance feature learning by harnessing the information derived from the GT depth (refer to global response for details). Same as the proposed loss $\cal{L_{qi}}$, all contrastive losses were applied to object feature $\rho$. Our results in `R-Table 1` highlight the comparison of the proposed $\cal{L_{qi}}$ to the modified contrastive losses. 
A detailed description of each contrastive loss implementation can be found in the global response. Regarding $\cal{L_\text{SimCLR}}$, it underperforms the baseline substantially. This is because $\cal{L_\text{SimCLR}}$ focuses on extracting discriminative features of the object even when the object features share identical GT depth. When employing $\cal{L_\text{SupCon}}$ or $\cal{L_\text{SupCR}}$ as the loss function, there is a marked improvement in performance over $\cal{L_\text{SimCLR}}$. This underscores the advantage of integrating depth label information during the training phase. It is noteworthy that $\cal{L_\text{SimCLR}}$ is self-supervised and does not leverage depth label information. We experimented with SupCon's approach denoted as $\cal{L_\text{SupCon v2}}$, where only $\cal{L_\text{SupCon}}$ was used to train the backbone feature space in the early stage. Later stages saw the training of task classifiers with $\cal{L_\text{baseline}}$ while freezing the backbone. But this setup fails to predict the 3D bounding box because the other sub-task classifiers of the model could not differentiate the pre-set feature space that was defined solely by depth. In conclusion, the method incorporating our quasi-isometric loss $\cal{L_{qi}}$ significantly surpasses those utilizing other contrastive losses, demonstrating an impressive margin of $+7.6$%p to $+23.6$%p. This is attributed to our quasi-isometric loss, which prioritizes neighboring samples and fine-tunes the feature manifold, all the while conserving its original shape and the intrinsic non-linearity derived from various tasks. For further insights, kindly refer to **A2** in response to **R2 (6BaQ)**. ___ >**Q3.** It would be helpful to show some visualization results of the learned features. **A3.** Following your suggestion, we included the visualization results of the feature spaces learned by our proposed method, alongside those derived from various metric learning methods in `R-Figure 1`. 
For clarity, the points represent projected object feature points, and their associated colors indicate depth GT values. ___
PanoGRF: Generalizable Spherical Radiance Fields for Wide-baseline Panoramas
Accept (poster)
Summary: They describe a novel method for panoramic view synthesis given wide-baseline panoramas as input. They first obtain depth maps for the input panoramas using a custom spherical depth estimation method guided by a pre-trained monocular depth estimator. They then extract geometry and appearance features from the input panoramas. To render a new panorama, they sample points along the viewing rays, project them into the feature grids, aggregate features using an estimated visibility term, decode the aggregated features into color+density, and compute the final color using volume rendering. They outperform recent generalizable view synthesis methods on Matterport3D, Replica, and Residential datasets. Strengths: They combine many ideas from prior work on perspective and spherical view synthesis, synthesizing these ideas into a novel approach. The closest work is probably NeuRay, which is a generalizable radiance field representation for perspective images. The main differences to NeuRay are that they work in spherical coordinates and they use monocular depth estimation to contribute to the visibility prediction and depth-guided sampling. Their evaluation is thorough and includes a comparison to a reasonable selection of recent view synthesis methods, including some designed for panoramas and some adapted to panoramas by applying the method to each side of a cube map. Quantitatively they show a large improvement in rendering quality, and this is also evident from the example images in the figures and the videos in the supplementary. They also include an ablation study in which they verify the usefulness of including 360 monocular depth and multi-view stereo in the method. Another contribution is the insight (reported in the supplemental) that fine-tuning did not lead to an improvement in rendering quality in their experiments, contrary to what was reported by NeuRay for example. Instead, fine-tuning led to overfitting.
They hypothesize that this is due to the use of wide-baseline images. This is an interesting result and points to potential future work to overcome this limitation. Weaknesses: In many ways the work is an adaptation of NeuRay to the spherical case. It could be argued that the work is somewhat incremental in that sense. However, their results for panoramic view synthesis are excellent and there is not a lot of previous work on wide-baseline panoramic view synthesis. Because the name includes GRF, I would think it appropriate to mention and cite GRF which is an earlier general radiance field approach which is a precursor to NeuRay: Trevithick, A., & Yang, B. (2021). GRF: Learning a general radiance field for 3d representation and rendering. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 15182-15192). Another weakness is that the provided video only shows planar movement. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: The videos only demonstrate camera movement along a line with no vertical movement. Does the method support complete 6DOF motion or is limited to planar motion? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The discussion of limitations is rather brief. They could explore failure cases more and highlight cases where the view synthesis result could be improved. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer WLFo: Thank you for the review and comments. We are glad to see your positive comments and score. We hope the following responses address your concerns. * For Weakness 1: >In many ways the work is an adaptation of NeuRay to the spherical case. It could be argued that the work is somewhat incremental in that sense. However, their results for panoramic view synthesis are excellent and there is not a lot of previous work on wide-baseline panoramic view synthesis. We appreciate your thoughtful comments. Although our method is inspired by adapting NeuRay to the spherical case, it goes well beyond that. To address the occlusion issue, we additionally employ 360&deg; monocular depth to guide the spherical depth sampling in the sphere sweeps of 360&deg; MVSNet. More insights behind our design can be seen in the global response. * For Weakness 2: >Because the name includes GRF, I would think it appropriate to mention and cite GRF which is an earlier general radiance field approach which is a precursor to NeuRay... Thanks for your nice suggestion. We will mention and cite GRF in the related work. * For Question: >The videos only demonstrate camera movement along a line with no vertical movement. Does the method support complete 6DOF motion or is limited to planar motion? Yes, PanoGRF supports complete 6DOF motion and is not limited to planar motion. To validate this point, we conducted an additional experiment where we synthesized novel views at positions 0.25 and 0.5 meters above the middle point between two input viewpoints. The input viewpoints are 1.0 meters apart. We compared the quantitative results of our method with NeuRay's using various indicators, as shown in the table. Our results consistently outperformed NeuRay's, demonstrating that our method achieves better results than NeuRay at viewpoints outside the input camera baseline. For the qualitative comparison, please refer to the attached PDF of our "global" rebuttal.
|distance|method|PSNR|WS-PSNR|SSIM|LPIPS|
|:----:|:----:|:----:|:----:|:----:|:----:|
|0.25m|NeuRay|20.66|20.05|0.714|0.409|
|0.25m|PanoGRF|**21.98**|**21.30**|**0.763**|**0.348**|
|0.5m|NeuRay|20.39|19.94|0.711|0.420|
|0.5m|PanoGRF|**21.99**|**21.42**|**0.769**|**0.349**|

* About limitation: >The discussion of limitations is rather brief. They could explore failure cases more and highlight cases where the view synthesis result could be improved. Many thanks for your suggestion! The qualitative result of the failure case can be seen in the attached PDF of our "global" rebuttal. At some viewpoints, a previously occluded area becomes visible; since this area has not been seen in either of the two input viewpoints, its synthesis is not effective. Such areas, unseen in both input viewpoints, could potentially be filled in by incorporating existing diffusion-based generative approaches. This is the next direction we plan to investigate. Best, Authors --- Rebuttal Comment 1.1: Comment: Thanks to the authors for their responses. I have read over the other reviews and the authors' responses. They have addressed the concerns well and I still recommend acceptance for this paper. --- Reply to Comment 1.1.1: Title: Reply of Authors Comment: Dear Reviewer WLFo: Many thanks for your positive feedback! Sincerely, Authors
Summary: This work tackles large-baseline (up to 2 meters) multi-view stereo for panorama images. Extending the NeuRay framework, the density and radiance fields are estimated based on features extracted from nearby views following panorama projection. To predict stereo depth as a geometric feature, this work proposes to use a 360 monocular depth estimation model to sample more sphere sweeps close to the surface. All the features are extracted or estimated by 2D/3D CNNs and can be trained on many scenes for generalization. Strengths: The quantitative and qualitative results are better than the compared baselines. Weaknesses: **Adapting from perspective cubemap to equirectangular projection is not a big contribution.** One of the main claimed disadvantages of some of the baseline methods (i.e., IBRNet and NeuRay in this work) is that their original codebases work on perspective instead of panoramic imagery (L36-38). However, adapting perspective projection to equirectangular projection is trivial. Some common adaptations like circular padding are easy to add as well. Many deep-learning based models also show promising quality by adapting CNNs to equirectangular panoramas, like LayoutNet [Zou et al] and DuLa-Net [Yang et al], to name a few. With these simple adaptations, the baseline methods can also be directly trained on equirectangular panoramas without the claimed disadvantage. **Some design choices are not well justified and discussed.** Please refer to questions 1, 2, and 3 for details. **Paper writing** - Sec.3.1 L122-123. $w_i$ is not the accumulated transmittance. Only the $\prod_{k=1}^{i-1} (1 - \alpha_k)$ is. - The $w$ is duplicated in Eq.1 and Eq.4 but they are totally different things. I suggest using different notations. - Table.1 and Table.2. It's better to directly use a table footnote describing that the "*" means the models are trained with perspective cubemap projection and evaluated with equirectangular projection. - L140. "When" should be "when".
Technical Quality: 2 fair Clarity: 1 poor Questions for Authors: 1. Why constrain the proposed method to aggregating only 2 panorama views (Eq.3)? The original NeuRay tried many different numbers of working views and found that 8 nearby views achieve saturated quality. An experiment to justify the design choice in this work is missing. 2. The search space of the stereo depth candidates is scene dependent and crucial. How is the $\beta$ in L215 determined? If applying the proposed method to other datasets, is there any rule of thumb to determine the $\sigma$ based on the scene scales? 3. In supp's Table.3, it's strange that the more fine-grained the sampling (i.e., larger $N_{mono}$), the worse the results the proposed method achieves. The paper should discuss this experiment. Besides, what if we use an even smaller $N_{mono}$? Given that the best result is achieved by the minimum $N_{mono}=5$ setting in Table.3, I think the ablation to find the optimum $N_{mono}$ is incomplete. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 1 poor Contribution: 2 fair Limitations: The discussed limitation looks reasonable. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
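The reviewer's writing point, that $w_i$ itself is not the accumulated transmittance and only $\prod_{k=1}^{i-1}(1-\alpha_k)$ is, can be made concrete with a short sketch of the generic volume-rendering weight computation (a standard formulation, not the paper's code):

```python
import numpy as np

def render_weights(alphas):
    """Per-sample volume-rendering weights w_i = alpha_i * T_i, where
    T_i = prod_{k=1}^{i-1} (1 - alpha_k) is the accumulated transmittance
    (T_1 = 1 by convention)."""
    alphas = np.asarray(alphas, dtype=float)
    T = np.concatenate(([1.0], np.cumprod(1.0 - alphas)[:-1]))
    return alphas * T
```

For instance, two samples with alpha 0.5 each yield weights 0.5 and 0.25: the second sample is attenuated by the transmittance left over after the first, and the weights always sum to at most 1.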
Rebuttal 1: Rebuttal: Dear Reviewer 74Ux: Thanks for the review and comments. We hope the following responses address your concerns. * For Weakness 1: >Adapting from perspective cubemap to equirectangular projection... We illustrate the key differences between PanoGRF and NeuRay from many aspects in the "global" response. Please refer to the more detailed clarification in our global response. The novel view synthesis task for wide-baseline panoramic images is challenging. First, input views are sparse. Second, the occlusion issue is severe. To tackle the first challenge, we develop generalizable spherical radiance fields that incorporate 360&deg; data priors into the spherical NeRF. For the second challenge, we introduce 360&deg; monocular depth into PanoGRF. We build up PanoGRF based on our key observations. NeuRay can only work on perspective images. If the perspective slices of the panoramic image are input, incorrect features are easily aggregated during the feature aggregation process due to the limited field of view of the perspective image. To solve this problem, we use spherical projection for feature alignment, maintaining a spherical image representation. Secondly, NeuRay relies on planar depth, while PanoGRF relies on spherical depth. Spherical depth provides more robust geometric features, avoids feature alignment issues similar to those mentioned earlier, and offers a full field of view and better continuity. We also observed that an ordinary 360&deg; MVSNet based merely on multi-view matching cannot handle the serious occlusion problem under a wide baseline. We use 360&deg; monocular depth to guide the depth sampling process of the spherical sweeps to introduce strong single-view priors. Furthermore, our method is simple and highly effective. Adapting perspective to equirectangular is not trivial. Many papers, such as [1, 2], worked on this. Besides, we also designed additional targeted methods for improvements, as mentioned above.
We also attempted to improve the performance of the CNN in PanoGRF to alleviate spherical distortion by introducing parts of the methods [1, 2]. However, these methods make almost no difference and increase the complexity of PanoGRF for the 360&deg; view synthesis task. What truly works for the synthesis of wide-baseline panoramic views is our proposed set of solutions. * For Weakness 2: >...Please refer to question 1,2,3 for detail. We answer your questions in the subsequent paragraphs. * For Weakness 3: >Paper writing... Thanks for your corrections. Following your suggestions, we will correct these mistakes in the main body of our paper. * For Question 1: >Why constrain the proposed method to aggregating only 2 panorama views... We utilize two wide-baseline panoramic views as inputs because our research focuses on the camera baseline size rather than the number of panoramic images. Therefore, we conducted comparative experiments under varying camera baselines. However, our method is not limited to two panoramas. To further verify the effectiveness of our method, we conducted comparative experiments with NeuRay using multiple panoramic images as inputs. We placed the four input viewpoints at the corners of a horizontal square and tested the rendering performance at the center viewpoint, as well as at other viewpoints located at -0.4, -0.2, -0.1, -0.05, 0.05, 0.1, 0.2, and 0.4 meters in the vertical direction from the center viewpoint. The diagonal length of the square is 1.0 meters. The table below presents the quantitative comparison results. As shown, PanoGRF still largely outperforms NeuRay with multiple panoramic inputs. The qualitative results can be found in the attached PDF of the global response.

|method|PSNR|WS-PSNR|SSIM|LPIPS|
|:----:|:----:|:----:|:----:|:----:|
|NeuRay|27.82|26.74|0.8614|0.2312|
|PanoGRF|**28.99**|**27.91**|**0.8762**|**0.2071**|

* For Question 2: >...How is the $\beta$ in L215 determined...
$\beta$ is taken to be 3 because [$\mu$-3*$\sigma$, $\mu$+3*$\sigma$] is a frequently used interval in Gaussian sampling in the vicinity of the expectation value. The value $\sigma$=0.5 is obtained from experiments on Matterport3D, where we tested the values 0.1, 0.5, 1.0, and 1.5, with 0.5 proving to be the most effective. In general cases, this value does not require adjustment for indoor scenes. However, if the depth scale of a scene is particularly large, this value may need adaptation. For instance, $\sigma$ can be set as 1/20 of the maximum depth of the scene. * For Question 3: >In supp's Table.3. It's strange that ... Thanks for your suggestion! We added the experiment as follows: |$N_{mono}$|L1|L2|RMSE|WS-L1|WS-L2|WS-RMSE| |:----:|:----:|:----:|:----:|:----:|:----:|:----:| |1|0.1586|0.2498|0.4317|0.1745|0.1971|0.3937| |3|**0.1432**|**0.1993**|**0.3865**|0.1529|0.1649|0.3580| |5|0.1441|0.2047|0.3877|**0.1502**|**0.1624**|**0.3546**| |7|0.1496|0.2252|0.4104|0.1640|0.1896|0.3832| |9|0.1645|0.2912|0.4511|0.1752|0.2215|0.4059| The reliability of 360&deg; monocular depth estimation is not perfect. Therefore, our paper employs uniform sampling to compensate for the remaining depth candidates. In the ablation experiments, we consistently maintain $N_{mono}+N_{uni}=64$, where $N_{uni}$ represents the sample number of uniform distribution. We discovered that the configuration $N_{mono}=5$ yields the best results with regard to the metrics of WS-L1, WS-L2, and WS-RMSE. Sincerely, Authors [1] Jiang, H., Sheng, Z., Zhu, S., Dong, Z., & Huang, R. (2021). Unifuse: Unidirectional fusion for 360 panorama depth estimation. IEEE Robotics and Automation Letters, 6(2), 1519-1526. [2] Li, Y., Guo, Y., Yan, Z., Huang, X., Duan, Y., & Ren, L. (2022). Omnifusion: 360 monocular depth estimation via geometry-aware fusion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 2801-2810). --- Rebuttal Comment 1.1: Title: Would you give us a response? 
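The Gaussian-guided depth sampling described in the answers above ($\beta = 3$, $\sigma = 0.5$, and $N_{mono} + N_{uni} = 64$) can be sketched as follows. The scene bounds `d_min`/`d_max` and the clipping of the Gaussian samples to $[\mu - \beta\sigma, \mu + \beta\sigma]$ are illustrative assumptions, not details confirmed by the rebuttal.

```python
import numpy as np

def sample_depth_candidates(mu, n_mono=5, n_total=64, sigma=0.5, beta=3.0,
                            d_min=0.1, d_max=10.0, seed=0):
    """Per-ray depth candidates: n_mono Gaussian samples drawn around the
    360-degree monocular depth estimate mu, clipped to the interval
    [mu - beta*sigma, mu + beta*sigma], plus uniform samples over the full
    depth range so that n_mono + n_uni = n_total."""
    rng = np.random.default_rng(seed)
    guided = np.clip(rng.normal(mu, sigma, size=n_mono),
                     mu - beta * sigma, mu + beta * sigma)
    uniform = np.linspace(d_min, d_max, n_total - n_mono)
    return np.sort(np.concatenate([guided, uniform]))
```

Keeping the uniform samples alongside the guided ones compensates for unreliable monocular depth estimates, matching the rationale given for retaining $N_{uni}$ in the ablation.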
Comment: Dear Reviewer 74Ux: We are sorry to bother you. We sincerely thank you for the review and comments. We have provided corresponding responses and results, which we believe have covered your concerns. As the author-reviewer discussion is approaching the deadline, we hope to receive your response before the deadline. If you have any other questions, we are willing to discuss them with you at any time. Sincerely, Authors
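The mono-guided depth-candidate sampling discussed in the rebuttal above (Gaussian candidates around the monocular depth estimate, clipped to $[\mu-\beta\sigma, \mu+\beta\sigma]$ with $\beta=3$ and $\sigma=0.5$, plus uniform candidates so that $N_{mono}+N_{uni}=64$) can be sketched as follows; the function name, depth range, and defaults are our own illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def sample_depth_candidates(mono_depth, d_min=0.1, d_max=10.0,
                            n_mono=5, n_uni=59, sigma=0.5, beta=3.0,
                            rng=None):
    """Hypothetical sketch of mono-guided depth-candidate sampling.

    n_mono candidates are drawn from a Gaussian centred on the 360-degree
    monocular depth estimate and clipped to [mu - beta*sigma, mu + beta*sigma];
    the remaining n_uni candidates are sampled uniformly over the scene's
    depth range, keeping n_mono + n_uni = 64 as in the rebuttal's ablation.
    """
    rng = np.random.default_rng() if rng is None else rng
    mu = float(mono_depth)
    gauss = rng.normal(mu, sigma, size=n_mono)
    gauss = np.clip(gauss,
                    max(d_min, mu - beta * sigma),
                    min(d_max, mu + beta * sigma))
    uniform = rng.uniform(d_min, d_max, size=n_uni)
    return np.sort(np.concatenate([gauss, uniform]))
```

With the rebuttal's best setting ($N_{mono}=5$), `sample_depth_candidates(3.0)` returns 64 sorted depth hypotheses, five of which cluster around the monocular estimate.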
Summary: This paper presents a method called PanoGRF for synthesizing novel panoramas using two wide-baseline panoramas, with the incorporation of 360 scene priors into Spherical NeRF to generate new views. The method involves extracting appearance and geometry features from the input panoramas and estimating spherical depths through convolutions. To enhance the accuracy of 360 panorama MVS depth estimation, the authors integrated a 360 monocular depth estimation network. Specifically, they employed a Gaussian distribution to sample depth candidates around the estimated 360 monocular depth. Extensive experiments were conducted to demonstrate the effectiveness of the proposed method in synthesizing novel views from wide-baseline panorama stereos. Strengths: The authors have performed thorough experiments to showcase the effectiveness of the proposed method, making the study comprehensive. The results obtained from the performance evaluation and ablation study are noteworthy, highlighting that the proposed method achieves state-of-the-art performance. The contributions of the proposed components in achieving this success are significant. Moreover, the paper is excellently written and presents the concepts in a clear and easily understandable manner. The figures are well-drawn, effectively conveying the main ideas and providing visual clarity to the readers. Weaknesses: The level of novelty in this paper is relatively limited since many of the components utilized have been proposed by previous methods. For instance, the appearance and geometry feature aggregation techniques are adopted from NeuRay, while the 360 MVS approach, which involves cost volume and mono-guided depth sampling, can be attributed to [14, 12, 28]. Furthermore, it is not particularly challenging to adapt these techniques to panoramas by employing spherical projection.
It is worth noting that the comparison with IBRNet and NeuRay may not be entirely fair due to the differing assumptions regarding the number of input images. Both IBRNet and NeuRay assume multiple images as inputs, while this paper solely employs two images in its experiments. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Although all experiments in the paper demonstrate the method's capability to generate views along the baseline of two cameras, the design of the proposed method suggests the potential to generate views outside of these limited regions. It would be valuable to investigate how the method performs in synthesizing views beyond the baseline. By conducting additional experiments to test the boundary and assess the performance of the proposed method in generating views outside the baseline, a more comprehensive understanding of its capabilities can be gained. This would be a beneficial addition to further validate and explore the boundaries of the proposed method. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 4 excellent Contribution: 1 poor Limitations: The authors have mentioned the limitations and societal impact of the paper in lines 273-278. It well covered the possible limitations and potential societal impact of the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your effort in reviewing our paper. We hope the following responses address your major concerns. * For Weakness 1: >The level of novelty... PanoGRF is not a simple combination of NeuRay and the 360&deg; novel view synthesis task. For a more detailed clarification, please refer to our global response. The novel view synthesis task for wide-baseline panoramic images is indeed highly challenging. First, input views are sparse. Second, the occlusion issue is severe. To address the first challenge, we develop generalizable spherical radiance fields that incorporate 360&deg; data priors into the spherical NeRF, preventing the overfitting problem. For the second challenge, we introduce 360&deg; monocular depth to guide the spherical depth sampling of 360&deg;MVSNet. It is also important to note that PanoGRF is not simply the combination of NeuRay and spherical projection. We built PanoGRF based on our key observations. As we mentioned in the global response, NeuRay can only work on perspective images. If the perspective slices of the panoramic image are input, incorrect features are easily aggregated during the feature aggregation process due to the limited field of view of the perspective image. To solve this problem, we use spherical projection for feature alignment, maintaining a spherical image representation and fully leveraging the omnidirectional field of view provided by panoramas. Secondly, NeuRay relies on planar depth, while PanoGRF relies on spherical depth. For generalizable renderers of panoramic images, spherical depth can provide more robust geometric features, avoid a feature-alignment issue similar to the one mentioned earlier, and offer a full field of view and better continuity. We also observed that an ordinary 360&deg;MVSNet based merely on multi-view matching cannot handle the serious occlusion problem under a wide baseline. In this regard, we propose a feasible solution.
Specifically, we use 360&deg; monocular depth to guide the depth sampling process of spherical sweeps to introduce strong single-view priors. Furthermore, our method is simple and highly effective, as demonstrated in the extensive comparative experiments (Sec.4.3 of the main text, Sec.E and Sec.F of the supplementary materials). These experiments validate that our solutions are crucial for wide-baseline panoramas. * For Weakness 2: >...the comparison with IBRNet and NeuRay may not be entirely fair... We would like to clarify that we utilize two wide-baseline panoramic views as inputs because our research focuses on the camera baseline size rather than the number of panoramic images. By ensuring that all experiments have two panoramic image inputs, we facilitate the execution of the experiments. Correspondingly, we split each panorama into cubemaps, with six sides per panorama, and then use all the cubemaps as inputs for NeuRay/IBRNet. Consequently, NeuRay/IBRNet receives twelve perspective images as inputs. This comparison remains fair under these input conditions. However, our method is not limited to two panoramas. For instance, when rendering a test view in the renderer module, we use $N$ input panoramas as reference images, and the renderer does not need to be modified. In the 360&deg; spherical depth estimator module, for each reference image, we use the other $N-1$ input panoramas as source images. We average the multiple cost volumes obtained during the 360&deg; multi-view matching process between the reference image and each source image. The rest is unchanged. In this way, our method can be applied to multi-view panoramic inputs. To further verify the effectiveness of our method, we conducted comparative experiments with NeuRay using multiple panoramic images as inputs.
We placed the four input viewpoints at the corners of a horizontal square and tested the rendering performance at the center viewpoint and at other viewpoints located at -0.4, -0.2, -0.1, -0.05, 0.05, 0.1, 0.2, and 0.4 meters in the vertical direction from the center viewpoint. The diagonal length of the square is 1.0 meters. The table below presents the quantitative comparison results between PanoGRF and NeuRay. As shown, PanoGRF still largely outperforms NeuRay with multiple panoramic inputs. The qualitative results can be found in the attached PDF of the global response. |method|PSNR|WS-PSNR|SSIM|LPIPS| |:----:|:----:|:----:|:----:|:----:| |NeuRay|27.82|26.74|0.8614|0.2312| |PanoGRF|**28.99**|**27.91**|**0.8762**|**0.2071**| * For Question: >...how the method performs in synthesizing views beyond the baseline... Thanks for your valuable suggestion! We conducted an additional experiment where novel views were generated at positions 0.25 and 0.5 meters above the middle point between the two input viewpoints. The input viewpoints are 1.0 meters apart. We compared the quantitative results of our method with NeuRay's, as shown in the table below. PanoGRF consistently surpassed NeuRay's performance, indicating its capacity to yield superior results beyond the camera baseline. For the qualitative comparison, please refer to the attached PDF file of our "global" rebuttal. We also present the failure case of PanoGRF under such conditions in the attached PDF file. In some viewpoints, a previously occluded area may become visible; since this area has not been seen in either of the two input viewpoints, the synthesis of this area is not effective. Such unseen areas might be filled in by combining our method with existing diffusion-based generative approaches. This is the next direction we plan to investigate.
|distance|method|PSNR|WS-PSNR|SSIM|LPIPS| |:----:|:----:|:----:|:----:|:----:|:----:| |0.25m|NeuRay|20.66|20.05|0.714|0.409| |0.25m|PanoGRF|**21.98**|**21.30**|**0.763**|**0.348**| |0.5m|NeuRay|20.39|19.94|0.711|0.420| |0.5m|PanoGRF|**21.99**|**21.42**|**0.769**|**0.349**| Best, Authors --- Rebuttal Comment 1.1: Title: Would you give us a response? Comment: Dear Reviewer kb7G: We are sorry to bother you. We sincerely thank you for the review and comments. We have provided corresponding responses and results, which we believe have covered your concerns. As the author-reviewer discussion is approaching the deadline, we hope to receive your response before the deadline. If you have any other questions, we are willing to discuss them with you at any time. Sincerely, Authors --- Rebuttal Comment 1.2: Comment: I gratefully acknowledge the author's comprehensive response, which addressed several of my concerns regarding the comparison and performance beyond the camera baseline. After a thorough examination of the feedback from other reviewers, I keep my original rating that the manuscript lacks the requisite novelty for a NeurIPS publication. The predominant components of the network and feature design derive from prior work. The principal contribution of this manuscript lies in adapting these features/networks from perspective projection to spherical projections, a sentiment echoed by reviewer 74Ux. While the authors undoubtedly demonstrate a commendable effort, enhancing the efficacy of preceding networks on panoramas, the degree of novelty presented remains circumscribed and falls short of the standards set for NeurIPS publication. --- Reply to Comment 1.2.1: Comment: Dear Reviewer kb7G, Thank you for your additional feedback. We respect your viewpoint, yet we believe there might be some misunderstandings or underestimations regarding the novelty and significance of our work. 
To address your concerns: > The predominant components of the network and feature design derive from prior work. The principal contribution of this manuscript lies in adapting these features/networks from perspective projection to spherical projections. 1. **PanoGRF's Novelty and Significance** - **Full Field of View with Spherical Representation:** We'd like to emphasize the advantage of using a spherical representation over cubemaps. As elaborated in Sec. 4.3, our comparison experiments affirm that the spherical representation surpasses cubemaps in effectiveness (please refer to the comparison with IBRNet* and NeuRay*). Common perspective view synthesis methods, when used, often face challenges during feature aggregation. This is due to the limited field of view causing projection errors. PanoGRF strategically uses the spherical representation to harness the entire field-of-view of panoramic images. This unique methodology offers enhanced feature aggregation and depth estimation, bypassing the limitations of cubemaps. - **Importance of Robust Generalizable Models:** As also observed by Reviewer WLFo, there is a pressing need for models that can withstand overfitting in wide-baseline settings. Our supplementary material in Section E demonstrates that techniques such as Dense Depth Priors [c], when employed in per-scene optimization, tend to overfit. Addressing this challenge, our research emphasizes the development of a robust, generalizable model suitable for these settings. - **Innovative Solution for Occlusion Issues:** Addressing occlusion in 360-degree MVSNet has been a significant challenge in the field. Our proposal of using 360-degree monocular depth to guide the spherical depth sampling introduces an innovative solution to this problem. Ablation studies, as referenced in Sec. 4.4, reveal that combining 360° MVSNet with 360° monocular Depth optimally leverages their strengths, producing superior synthesis results. 2. 
**Beyond Simple Adaptation** - Our methodology is not a mere adaptation from perspective projection to spherical projections. It's essential to note that a naive 360-degree MVSNet remains insufficient in handling occlusion issues, especially in wide-baseline scenarios. PanoGRF's design is integrative: it brings together 360-degree monocular depth and 360-degree MVSNet, bolstered with single-view priors. Plus, by employing a spherical representation, PanoGRF maintains the integrity of panoramic images, offering improved continuity and a more comprehensive view than traditional methods. To conclude, we firmly believe that PanoGRF offers novel methodologies and insights into the realm of wide-baseline spherical view synthesis. We trust this response sheds light on our perspective and hope it addresses your concerns adequately. We are grateful for your thorough evaluation and welcome any further feedback. Sincerely, Authors
Summary: This paper introduces PanoGRF, a method for generalizable novel view synthesis of sparse panorama images with wide baselines. PanoGRF is basically built upon the perspective view synthesis method NeuRay. Previous generalizable NeRF methods are mainly designed for perspective images and may introduce extra errors between the projection of perspective views and panorama views. To perform generalizable view synthesis given multiple panoramas, the authors 1) design CNN modules on panoramas to extract the geometry and appearance fields, constructing the cost volumes for density and color estimation; 2) design a 360-panorama depth estimation method based on both multi-view image feature correlations and cues from monocularly estimated depth maps. The predicted depth maps are used for the subsequent geometry feature extraction. Strengths: - The paper is well written and easy to understand. - Lots of supplementary materials are given to help the readers fully appreciate the experimental results and algorithm designs. - The proposed method demonstrates state-of-the-art view synthesis quality on the generalizable view synthesis task from panorama images. Weaknesses: - The overall idea for generalizable view synthesis is not new and is basically built upon the existing NeuRay work, therefore it is a bit incremental for the research community. - The qualitative results are limited in indoor scenes as discussed in the limitations. It would be better if PanoGRF can be trained and evaluated on outdoor unbounded scenes. Technical Quality: 3 good Clarity: 3 good Questions for Authors: The paper is written very clearly, and I find no significant questions to ask. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The limitations are adequately discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the review and comments. We are pleased to receive positive feedback on our performance and writing. We hope the following responses will address your concerns effectively. * For Weakness 1: >The overall idea for generalizable view synthesis is not new and is basically built upon the existing NeuRay work, therefore it is a bit incremental for the research community. We think it is actually not incremental. Please refer to the detailed contribution clarification in our global response. Novel view synthesis from wide-baseline panoramas is an extremely challenging task. As mentioned in our global response, both NeRF and perspective generalization methods struggle to achieve good results under such settings. Directly applying perspective methods to wide-baseline panoramas is non-trivial due to differences in 3D representation and the severe occlusion problem.
We observed their limitations and proposed crucial solutions: instead of relying on ordinary perspective inputs and planar depth (z-depth), we directly input panoramic images and spherical depth. We align appearance and geometry features in the spherical representation. To provide a more robust geometry feature, we introduce 360&deg; monocular depth into 360&deg; MVSNet, further raising the upper limit of PanoGRF. Although PanoGRF is inspired by perspective generalization methods, it is not incremental. Our comprehensive experiments validate that our solutions are vital for wide-baseline panoramas. More importantly, no similar work has been done on panoramic images. * For Weakness 2: >The qualitative results are limited in indoor scenes as discussed in the limitations. It would be better if PanoGRF can be trained and evaluated on outdoor unbounded scenes. Indeed, the current absence of large-scale outdoor panoramic datasets prevents PanoGRF from being trained on outdoor scenes. Furthermore, the significant depth-scale differences between indoor and outdoor settings limit PanoGRF's generalization capabilities for outdoor scenes, which also applies to other generalizable methods for perspective images. However, if a large-scale outdoor panoramic dataset were available, PanoGRF could also be trained and evaluated on outdoor scenes. We consider addressing this issue as part of our future work. Sincerely, Authors --- Rebuttal Comment 1.1: Title: Would you give us a response? Comment: Dear Reviewer un5g: We are sorry to bother you. We sincerely thank you for the review and comments. We have provided corresponding responses and results, which we believe have covered your concerns. We noticed that half of the designated author-reviewer discussion period has elapsed. We hope to receive your response before the deadline. If you have any other questions, we are willing to discuss them with you at any time. Sincerely, Authors
Rebuttal 1: Rebuttal: Dear Reviewers: Thank you for acknowledging the strong performance of this work. As Reviewer un5g said, quantitatively PanoGRF shows a large improvement in rendering quality, and this is also evident from the example images in the figures and the videos in the supplementary. We clarify the contributions of our paper as follows to highlight our unique insights and to demonstrate that our method is not just a simple composition of existing techniques. The novel view synthesis task for wide-baseline panoramic images is highly challenging. Firstly, the sparsity of input views makes it difficult for NeRF to learn the correct geometry. Secondly, certain areas of the scene may be visible from only one view and occluded in others, making it difficult to provide accurate geometry using 360&deg; multi-view stereo alone. To tackle the first challenge, we develop generalizable spherical radiance fields that incorporate 360&deg; data priors into the spherical NeRF, preventing the overfitting problem. For the second challenge, we introduce 360&deg; monocular depth to guide the spherical depth sampling of 360&deg; MVSNet. * Perspective generalization methods (such as NeuRay) inherently have some drawbacks, as they can only be applied to perspective slices obtained from panoramas. PanoGRF is dedicated to panoramic inputs, based on our key observations. * __feature aggregation__: We observed that common perspective view synthesis methods encounter a significant issue during feature aggregation: due to the limited field of view of the perspective view, a 3D sample point is often projected outside the perspective view or behind the camera (z-depth<0). Aggregating features in this manner is incorrect.
Recognizing the limitations of perspective generalization methods led us to the idea of using spherical projection for feature alignment, maintaining a spherical image representation, and fully leveraging the omnidirectional field of view provided by panoramas. We have demonstrated the importance of this approach through comparative experiments in Sec.4.3 of the main text and Sec.E and Sec.F of the supplementary materials. * __depth estimation__: NeuRay relies on planar depth (z-depth), while PanoGRF relies on spherical depth. It is more robust to use spherical depth predicted by 360&deg; MVSNet than planar depth predicted by an ordinary MVSNet. The multi-view feature matching process in perspective images suffers from a feature-alignment problem similar to the one mentioned earlier. Moreover, splitting panoramas into cubemaps to compute z-depth disrupts the neighborhood relationship between cubemaps, which can affect accuracy. Besides, we observed that the occlusion issue is quite severe under the wide-baseline setting. Existing 360&deg; methods [1, 2, 3, 4, 5] cannot handle wide-baseline 360&deg; depth estimation in general cases. Methods [1, 2, 3] overlooked the occlusion problem under wide baselines. Methods [4, 5] considered occlusion but utilized a special camera rig to capture wide field-of-view images, making them unsuitable for wide-baseline 360&deg; depth estimation tasks with general 360&deg; cameras. We propose a feasible and effective solution to address this issue, which we introduce later. We believe that our observations can contribute to and benefit related research within the community. * Relying solely on 360&deg; multi-view stereo is insufficient to handle the occlusion issue. To address this problem, we employ a 360&deg; monocular depth network to guide the depth sampling of 360&deg; MVSNet during sphere sweeps and to contribute to the visibility prediction of the generalizable renderer.
We conducted ablation studies with various camera baselines to verify that 360&deg; MVSNet and 360&deg; Monocular Depth, despite their individual shortcomings, can complement each other and play a crucial role in the task of 360&deg; novel view synthesis under wide baselines. In summary, we propose easy-to-implement and highly effective solutions to the challenges we encountered, and have conducted comprehensive comparisons and ablation studies to demonstrate the effectiveness of our method. Our method significantly outperforms baseline methods (including NeuRay) on several panorama datasets. Considering that Reviewer zV2U, un5g and WLFo have all agreed that our paper has great merits, we believe our research findings are worth sharing with the community. Best, Authors [1] Wang, N. H., Solarte, B., Tsai, Y. H., Chiu, W. C., & Sun, M. (2020, May). 360sd-net: 360 stereo depth estimation with learnable cost volume. In 2020 IEEE International Conference on Robotics and Automation (ICRA) (pp. 582-588). IEEE. [2] Xie, S., Wang, D., & Liu, Y. H. (2023). OmniVidar: Omnidirectional Depth Estimation From Multi-Fisheye Images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 21529-21538). [3] Chiu, C. Y., Wu, Y. T., Shen, I., & Chuang, Y. Y. (2023). 360MVSNet: Deep Multi-View Stereo Network With 360deg Images for Indoor Scene Reconstruction. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (pp. 3057-3066). [4] Won, C., Ryu, J., & Lim, J. (2019). Omnimvs: End-to-end learning for omnidirectional stereo matching. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 8987-8996). [5] Won, C., Ryu, J., & Lim, J. (2020). End-to-end learning for omnidirectional stereo matching with uncertainty prior. IEEE transactions on pattern analysis and machine intelligence, 43(11), 3850-3862. Pdf: /pdf/4b77d2a2257764735f3ea56ed4581221e9905c2b.pdf
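The spherical-projection feature alignment argued for in this global rebuttal can be illustrated with a small sketch; the function name, coordinate conventions, and pixel mapping below are our own assumptions rather than the paper's code. The key property is that every finite 3D sample point maps to a valid equirectangular pixel, so the out-of-view projection failure of perspective methods cannot occur:

```python
import numpy as np

def spherical_project(points, width, height):
    """Hypothetical equirectangular projection of 3D sample points.

    Maps camera-space points to (u, v) pixel coordinates on a panorama via
    longitude = atan2(x, z) and latitude = asin(y / r).  Unlike a perspective
    projection, any finite 3D point yields a valid pixel, so feature
    aggregation never samples outside the image or behind the camera.
    """
    x, y, z = points[..., 0], points[..., 1], points[..., 2]
    r = np.sqrt(x**2 + y**2 + z**2)                      # spherical depth
    lon = np.arctan2(x, z)                               # in [-pi, pi]
    lat = np.arcsin(np.clip(y / np.maximum(r, 1e-8), -1.0, 1.0))
    u = (lon / (2 * np.pi) + 0.5) * width
    v = (lat / np.pi + 0.5) * height
    return np.stack([u, v], axis=-1), r
```

Note that the returned `r` is the spherical depth the rebuttal contrasts with planar z-depth: it is defined for all directions, including points with z < 0 that a perspective camera could not see.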
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper presents PanoGRF, Generalizable Spherical Radiance Fields for Wide-baseline Panoramas, which introduces mono-guided 360° depth estimation and leverages each panoramic view based on spherical projection. The experiments show that the proposed method significantly outperforms state-of-the-art generalizable view synthesis methods for wide-baseline panoramas (e.g., OmniSyn) and perspective images (e.g., IBRNet, NeuRay). Strengths: ## Pros 1. Novelty: It's interesting to directly aggregate geometry and appearance features of 3D sample points from each panoramic view based on spherical projection. It's novel to leverage mono-guided 360° depth estimation to improve the geometry features. Although it can be argued that it's just the combination of NeuRay + mono-mvs + spherical radiance fields. 2. Performance: The results are remarkable compared with the sota methods. 3. Writing: it's well-written and easy to follow. Weaknesses: Cons: I do not find any major concerns. The insight is easy to follow and the experiments are adequate. But I have some minor concerns as follows: 1. it seems to use false citation: S-NeRF[17]. 2. It may lack some ablation studies: w/o aggregation geometry/appearance feature Technical Quality: 3 good Clarity: 3 good Questions for Authors: please see the weakness. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer zV2U: We sincerely appreciate your review and comments. We are delighted to receive your positive feedback and score. We hope the following responses will address your concerns effectively. 1. Weakness 1: > it seems to use false citation: S-NeRF[17]. We apologize for the confusion. In fact, S-NeRF denotes the spherical variant of NeRF, which casts rays by spherical projection to adapt to panoramic inputs, as described in Sec.3.1 of our paper. So we cite the original paper of NeRF. To avoid confusion, we will change the description from "S-NeRF [17]" to "S-NeRF (spherical variant of NeRF [17])" in the revision of our paper. Thanks for pointing out this issue. 2. Weakness 2: > It may lack some ablation studies: w/o aggregation geometry/appearance feature. Thank you for your suggestion. We conducted ablation studies on Matterport3D, and the results are shown in the table below. In the "w/o appearance feature" ablation study, we replaced the appearance feature vector with a zero vector to disable the appearance feature while keeping other modules unchanged. We found that the model without appearance features entirely loses its ability to infer the color of novel views, as the generalizable renderer heavily relies on appearance cues from input views. In the "w/o geometry feature" ablation study, we replaced the geometry feature vector with a zero vector to disable the geometry feature while keeping other modules unchanged. We observed that although the model can still produce normal results, its performance is significantly worse than that of the original (full) model. Due to the length constraints of the main text, we will include these ablation studies in the supplementary material.
||PSNR|WS-PSNR|SSIM|LPIPS| |:----:|:----:|:----:|:----:|:----:| |w/o appearance|5.44|5.24|0.001|0.707| |w/o geometry|26.25|25.25|0.839|0.263| |full|28.10|27.12|0.876|0.195| Best, Authors --- Rebuttal Comment 1.1: Title: Official Comment by Reviewer zV2U Comment: After reading the rebuttal and all reviews, although I'm positive about this paper, I still recommend the reviewer consider the comment from the Reviewer zV2U, meanwhile, I give a potential solution comment about how to solve the concern, please see them and try to solve it. If the experiment results perform well, I would recommend accepting this paper. --- Reply to Comment 1.1.1: Comment: Dear Reviewer zV2U: We appreciate your suggestions and positive comments. We sincerely hope our response can cover your concerns. >A potential naive experiment may be using the related method with the sparse mono input, and then transforming the result to the panorama. If the proposed method PanoGRF outperforms the naive solution, it will demonstrate the insight of the proposed method well. **We have already included the comparisons with existing methods using sparse monocular input and then rendering the result into panoramas in the main paper and the supplementary.** PanoGRF outperforms the existing methods mentioned due to its spherical representation and incorporation of 360-degree monocular depth. - MVSNeRF[a] is not as effective as NeuRay[f], as demonstrated in the comparative experiments of NeuRay. NeuRay* in Sec.4.3 of the main text of our paper takes cubemaps (i.e., sparse monocular perspective images) and corresponding planar depths as input. We evaluated the panoramic results of NeuRay*. Extensive experiments in Sec.4.3 have shown that PanoGRF is superior to NeuRay* due to the spherical representation. The inherent limitations of cubemap projections (perspective projections), such as diminished continuity and restricted field of view, affect both feature aggregation and depth estimation. 
- All the mentioned methods except MVSNeRF are per-scene optimized. Under wide-baseline settings, per-scene optimization methods with depth priors (e.g., Dense Depth Priors [c]) lead to overfitting, as demonstrated in Sec. E of the supplementary material and highlighted by Reviewer `WLFo`. It is more robust to train a generalizable renderer under the wide-baseline setting. The direct comparisons with Dense Depth Priors using cubemap inputs should be consistent with those in the supplementary materials, and we will update the experimental results if time allows. Thanks again for your response and positive feedback. Sincerely, Authors [a] Chen et al., MVSNeRF: Fast Generalizable Radiance Field Reconstruction from Multi-View Stereo, ICCV 2021. [b] Chen et al., GeoAug: Data Augmentation for Few-Shot NeRF with Geometry Constraints, ECCV 2022. [c] Roessle et al., Dense Depth Priors for Neural Radiance Fields from Sparse Input Views, CVPR 2022. [d] Deng et al., Depth-supervised NeRF: Fewer Views and Faster Training for Free, CVPR 2022. [e] Yu et al., MonoSDF: Exploring Monocular Geometric Cues for Neural Implicit Surface Reconstruction, NeurIPS 2022. [f] Liu et al., Neural Rays for Occlusion-aware Image-based Rendering, CVPR 2022.
null
null
null
null
null
null
ResShift: Efficient Diffusion Model for Image Super-resolution by Residual Shifting
Accept (spotlight)
Summary: This paper constructs a Markov chain that transfers between high-resolution and low-resolution images by shifting the residual between them. They achieve performance competitive with SOTA methods using only 20 sampling steps. Strengths: 1. The paper structure is clear and the proposed method is well-motivated. 2. It requires only a small number of iteration steps to achieve competitive performance. Weaknesses: 1. A brief figure introducing ResShift should be provided. 2. Why is this method called ResShift? The motivation and function should be discussed. Moreover, an ablation study of the residual shift should be provided. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: 1. Since there are many math equations, it may be difficult for other researchers to reproduce the method. Would the implementation code be released? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The authors have addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > **Q1. A brief figure to introduce ResShift should be provided.** Thanks for your suggestion. We have provided an overview of our model in Fig. 3(a) of the attached rebuttal document, and will add it to our paper in the revised version. > **Q2. Why is this method called ResShift? The motivation and function should be discussed.** The motivation of this work derives from an intuitive observation: the transition from a high-resolution (HR) image to its low-resolution (LR) counterpart is more efficient than the transition from the HR image to Gaussian noise. This assertion stems from the fact that the residual between an HR-LR image pair is often small. Therefore, this study develops a novel diffusion model that builds a Markov chain between the HR and LR images by gradually **Shift**ing the **Res**idual between them. We thus refer to the proposed diffusion model as ResShift, encapsulating the essence of this methodology. > **Q3. Moreover, the ablation study of the res-shift should be provided.** The whole formulation of our proposed model is methodically built on the strategy of residual shifting. It is thus not possible to ablate the residual shift itself. However, we have provided extensive ablation analysis of the other configurations of our model in Sec. 4.2. > **Q4. Would the implementation code be released?** We promise to release the source code of this work after revision. --- Rebuttal Comment 1.1: Comment: I would like to keep my rating as "Weak Accept"
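The residual-shifting idea described in Q2 can be sketched numerically. The following is a minimal, hypothetical sketch (not the paper's exact formulation): the marginal at step t is Gaussian with mean interpolated between the HR image `x0` and LR image `y0` by a monotone schedule `eta_t`, so the chain starts near the HR image and ends near the LR image. All names here are illustrative.

```python
import numpy as np

def forward_marginal(x0, y0, eta_t, kappa, rng):
    """Sample x_t ~ N(x0 + eta_t * (y0 - x0), kappa^2 * eta_t * I).

    Hedged sketch of a residual-shifting forward marginal: as eta_t
    goes from 0 to 1, the mean moves from the HR image x0 to the LR
    image y0, so the chain bridges the HR-LR pair directly instead of
    ending at pure Gaussian noise.
    """
    e0 = y0 - x0  # residual between the LR and HR images
    mean = x0 + eta_t * e0
    noise = kappa * np.sqrt(eta_t) * rng.standard_normal(np.shape(x0))
    return mean + noise
```

With `eta_t = 0` the sample is exactly the HR image, and with `eta_t = 1` (and no noise, `kappa = 0`) it is exactly the LR image, which matches the endpoint behaviour the rebuttal describes.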
Summary: This paper studies diffusion-based image super-resolution (SR) with the goal of reducing the number of diffusion steps. The key intuition is to only learn the residual between an LR-HR image pair, thereby shortening the diffusion path. To this end, the paper introduces ResShift, a novel diffusion framework where the mean of the prior distribution is the LR image. It further provides a derivation of the forward and reverse processes, identifies an effective parameterization of the denoiser network, and analyzes the fidelity-realism tradeoff induced by the noise schedule. Extensive quantitative and qualitative experiments are conducted to verify the effectiveness and efficiency of the proposed method. In particular, ResShift achieves competitive results on the 4x SR task at a low cost of 20 diffusion steps, outperforming the strong baseline of LDM under the same sampling budget. Strengths: - The paper makes the key observation that the residual between an LR-HR image pair is often small. It builds a novel diffusion framework around this insight to reduce the number of sampling steps for improved efficiency. Incorporating domain knowledge into the design of a diffusion model is a key strength of the paper. - Within the proposed diffusion framework, the paper studies the fidelity-realism tradeoff induced by noise schedules through detailed theoretical analysis and ablation experiments. - The proposed method is evaluated on both synthetic and real datasets. It demonstrates comparable or sometimes better results with respect to strong baselines. In particular, it consistently outperforms LDM under the same sampling budget (20 or 40 steps). - Overall, the paper is well-written, and the flow of presentation is easy to follow. Weaknesses: - My main concern is that the comparison with LDM is unfair. In the main experiments, the number of diffusion steps is limited to 20 or 40. 
This makes sense, as the goal is to demonstrate that ResShift yields stronger results under a tight sampling budget. However, LDM is put at a significant disadvantage in this comparison, because its default DDIM sampler is not well-suited for short sampling paths. In this regime, more powerful samplers exist (e.g., Heun and DPM solvers) and, according to the reviewer's own experience, will likely produce far better results within 20 or 40 NFEs. This raises an important question: is the proposed diffusion model tailored for SR truly superior to the vanilla model in terms of efficiency, when the latter is paired with a fast solver? This is key to assessing the paper's contribution, yet is not addressed by the current set of experiments. - DDRM is another diffusion-based model for SR in addition to LDM. The authors are encouraged to compare ResShift against DDRM as well. - The qualitative results on real data seem inconsistent across different images. In Figure 4, ResShift produces sharper and more detailed output than the baselines, whereas it is more prone to artifacts as shown in Figure 1 in the supplement. I am wondering what causes this discrepancy. Are the images from different real datasets, or is there just a lot of randomness? Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please see the section above for questions. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The paper discusses limitations of the method. The authors are encouraged to provide a short discussion on the potential negative societal impact of their work, such as hallucination of inappropriate image content. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > **Q1. The comparison with LDM is unfair. More advanced samplers should be considered.** As suggested, we conducted a comparison to LDM with 20 sampling steps accelerated by more advanced samplers, including PNDM (ICLR 2022) and DPM (NeurIPS 2022). The quantitative comparison results on the ImageNet-Test dataset are presented as follows:

| Methods | Sampler | PSNR$\uparrow$ | SSIM$\uparrow$ | LPIPS$\downarrow$ | CLIPIQA$\uparrow$ | MUSIQ$\uparrow$ |
| :----: | :----: | :----: | :----: | :----: | :----: | :----: |
| LDM-20 | DDIM | 24.76 | 0.639 | 0.284 | 0.630 | 48.355 |
| LDM-20 | PNDM | 22.87 | 0.549 | 0.285 | 0.738 | 55.820 |
| LDM-20 | DPM | 22.28 | 0.526 | 0.302 | 0.728 | 54.123 |
| F-ResShift-20 | - | 23.72 | 0.615 | 0.246 | 0.773 | 57.408 |

It can easily be seen that incorporating these advanced sampling techniques brings a pronounced enhancement in perceptual quality, as assessed by CLIPIQA and MUSIQ. However, this gain comes at a substantial cost in fidelity, as quantified by PSNR and SSIM. Even so, our proposed method still exhibits a clear advantage over these accelerated variants across all five metrics. Therefore, the investigation undertaken in this study remains significant. > **Q2. Performance comparison with DDRM.** DDRM is a non-blind diffusion-based restoration method, and cannot handle complicated real-world image super-resolution with unknown degradation types. Thus, we omitted the comparison with DDRM in the submitted manuscript. To more comprehensively evaluate the proposed method, we conducted a fair comparison with DDRM on the task of x4 bicubic super-resolution. Please see Q1 of the global response. > **Q3. Artifacts in Fig. 1 of the supplement. Is there just a lot of randomness?** We want to clarify that the artifacts observed in rare instances, e.g., the second example in Fig. 1 of the supplement, can be predominantly attributed to insufficient training.
Our model in the submitted version was trained for 500k iterations with a batch size of 64. We empirically find that the artifacts can be substantially mitigated by prolonging training to 800k iterations. This extended model is denoted as F-ResShift+, and the revised results are presented in Fig. 4 of the accompanying rebuttal file. Notably, LDM is trained for 2560k iterations with a batch size of 256, incurring significantly greater computational resources than our proposed approach. Besides, the restored results of F-ResShift+ under multiple different random seeds are also shown in Fig. 4 (h)-(k). We can see that the randomness among the multiple solutions is exceedingly slight, and thus acceptable in the task of image super-resolution. --- Rebuttal 2: Title: Updated rating Comment: The rebuttal addressed my concerns. I thus raised my rating from borderline reject to weak accept. The authors are encouraged to include quantitative and qualitative results of LDM with different solvers in their revision. --- Rebuttal Comment 2.1: Comment: We're happy that the rebuttal has addressed your concerns. As suggested, we will include the quantitative and qualitative results of LDM with various solvers in our revised version. Thanks for your feedback.
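Throughout these tables, fidelity is quantified with PSNR (and SSIM) while perceptual quality uses CLIPIQA/MUSIQ. As a quick reference, a minimal PSNR implementation for images scaled to [0, 1] looks like the following (a generic sketch, not tied to the paper's evaluation code):

```python
import numpy as np

def psnr(x, y, peak=1.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference.

    Assumes x and y share the same shape and value range [0, peak].
    """
    mse = np.mean((np.asarray(x, dtype=float) - np.asarray(y, dtype=float)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

For example, a uniform error of 0.1 on a [0, 1] image gives 10 * log10(1 / 0.01) = 20 dB, which puts the 22-25 dB scores in the table above in perspective.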
Summary: The paper presents a new image super-resolution diffusion model called "resshift", which aims to address the efficiency issue in diffusion models. Existing acceleration strategies often yield over-smooth results. To combat this, the authors propose a new diffusion model for super-resolution that can produce favorable results in just 20 sampling steps. This method is based on operating on residuals, and the paper also proposes a noise-adding strategy to aid this process. The main experiments are performed on real super-resolution settings, imitating the prior setup of Real-ESRGAN, and include testing on real images. Strengths: The experimental results appear quite promising. Specifically, the outcomes produced with 20 steps seem slightly better than existing methods. This shows that the proposed method can efficiently achieve super-resolution with fewer sampling steps, and the introduced noise-adding strategy appears to be a beneficial supplement to the overall approach. Weaknesses: The major issue with this paper lies in its presentation. The central idea of the paper is not clearly articulated. In the second section, although the method is described meticulously, it does not seem significantly different from existing methods. The authors do not clearly highlight what distinguishes their approach. It is recommended that the authors construct a table to illustrate why the "shift" and "residual" ideas are useful, and provide a conceptual narrative at the beginning of the second section to give readers an overall impression before diving into mathematical expressions. Secondly, Figure 2 could potentially be misleading. It should be the image obtained by adding noise in the VQ-VAE's latent space, but this is not just noise because a conversion to RGB image is performed. This needs further clarification since it's difficult to discern that Gaussian noise is added without any explanation. Lastly, the method's versatility needs to be elaborated on.
The method does not seem to be confined to super-resolution and appears to be applicable to all image restoration models. The question arises whether it could be applied to other image processing tasks. There also needs to be more comparative analysis, as many recent acceleration methods, including CM models, have not been included in the comparison. Technical Quality: 3 good Clarity: 1 poor Questions for Authors: see Weakness Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 1 poor Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > **Q1. The major issue with this paper lies in its presentation. The central idea of the paper is not clearly articulated. In the second section, although the method is described meticulously, it does not seem significantly different from existing methods. The authors do not clearly highlight what distinguishes their approach.** The motivation of this work derives from an intuitive observation: the transition from a high-resolution (HR) image to its low-resolution (LR) counterpart is more efficient, requiring fewer diffusion steps, than the transition from the HR image to Gaussian noise. This assertion stems from the fact that the residual between an HR-LR image pair is often small. Such a diffusion process can be implemented by gradually shifting this residual, and a corresponding visualization of the hidden states during the transition is shown in Fig. 3(a) of the affiliated rebuttal file. The experimental results also substantiate the rationality of this motivation. Besides, we are very glad that our motivation and presentation are highly recognized and appreciated by all the other Reviewers. Furthermore, we want to kindly clarify the contribution of this work so as to distinguish it from existing methods. Based on the aforementioned motivation, we innovate by designing a diffusion model from the mathematical formulation of the forward and reverse processes to the optimization strategy. The devised model enables a seamless transition between the HR-LR image pair, differing from extant approaches that transition from the HR image to Gaussian noise. Beyond the scope of this theoretical model innovation, a more flexible noise schedule is also proposed to better control the perception-distortion trade-off. Collectively, these contributions serve to establish clear differentiation between our approach and previous research. > **Q2. 
Figure 2 could potentially be misleading. It should be the image obtained by adding noise in the VQ-VAE's latent space, but this is not just noise because a conversion to RGB image is performed. This needs further clarification since it's difficult to discern that Gaussian noise is added without any explanation.** We sincerely appreciate this constructive suggestion. We first want to clarify that our proposed method provides a general framework to establish a transition between any two variables, i.e., the HR image $x_0$ and the LR image $y_0$ in the context of image super-resolution. Given a pre-trained VQGAN with encoder $E$ and decoder $D$, it is natural to extend our method to model the transition between $z_0$ and $v_0$ in latent space, where $z_0=E(x_0)$, $v_0=E(y_0)$. It should be noted that, when modelling in latent space, the encoding and decoding procedures occur only once, at the initial and terminal states of the forward and reverse processes. For example, in the reverse process, we only need to estimate $z_0$ from $v_0$ (encoded from $y_0$) using the trained diffusion model, and then obtain the restored image via $x_0=D(z_0)$. Thus, there is no need to worry about converting the introduced Gaussian noise between the latent space and the RGB space in our model. Next, we give more details about the visualization in Fig. 2. Specifically, we first map $x_0$ and $y_0$ to $z_0$ and $v_0$ via the encoder $E$, then generate the intermediate states $\lbrace z_t \rbrace_{t=1}^T$ using the transition kernel in Eq. (1) in latent space, and finally convert them to the RGB space through $x_t=D(z_t)$. In fact, Figure 2 in our paper visualizes the decoded images $x_t$. We will add this explanation in our revised version for easier understanding. > **Q3. Whether the proposed method could be applied to other image processing tasks.** Yes. The proposed method is a general methodology that can address a wide spectrum of restoration tasks. 
To verify the versatility of our method, we further evaluate its effectiveness on the task of blind face restoration. A subset of 3000 images is randomly selected from CelebA as our testing dataset, following CodeFormer (NeurIPS 2022). The quantitative comparisons to GFPGAN (CVPR 2021), VQFR (ECCV 2022), and CodeFormer are listed in Table 3 of the associated rebuttal document. Furthermore, a qualitative comparison on one real-world example is also shown in Fig. 1. We can easily see that ResShift achieves the best, or at least comparable, performance relative to the recent SotA method CodeFormer. This compellingly suggests the potential for a seamless extension of our method to blind face restoration.
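The latent-space pipeline clarified in Q2 above (encode once, run the whole chain in latent space, decode once) can be sketched as follows. `E`, `D`, and `reverse_diffusion` are stand-ins: in the paper `E`/`D` are a pretrained VQGAN encoder/decoder and the reverse chain is learned, whereas here they are toy functions so the sketch runs.

```python
import numpy as np

def E(img):  # stand-in encoder (the paper uses a pretrained VQGAN encoder)
    return img * 0.5

def D(lat):  # stand-in decoder, the inverse of E for this toy example
    return lat * 2.0

def restore(y0, reverse_diffusion):
    """Encoding/decoding happens only once, at the endpoints; the whole
    residual-shifting chain runs in latent space."""
    v0 = E(y0)                   # encode the LR image y0 once
    z0 = reverse_diffusion(v0)   # estimate the HR latent z0 from v0
    return D(z0)                 # decode the estimated latent once
```

With an identity reverse chain this round-trips the input unchanged, which makes the point of the rebuttal concrete: no per-step conversion between latent and RGB space is ever needed.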
Summary: The authors propose a new diffusion model, based on the principle of residual shifting, that is able to converge to a good-looking image in a low number of diffusion steps, improving high-resolution image inference time at least over the super-resolution variant of the Latent Diffusion Model (LDM), another well-established diffusion-based solution for image upscaling. The authors provide extensive ablations (both in the main manuscript and the supplementary material) showing that the model is able to achieve a significant level of performance, at least in terms of subjective results analysis. The trade-off between the number of diffusion steps and the achieved "image quality" (subjective) is well explored. However, some concerns and comments will be found in the appropriate section of the review. The metrics chosen for performance quantification (PSNR, SSIM, LPIPS, CLIPIQA, MUSIQ) show mixed results, with the model characterized by SOTA performance in terms of no-reference image metrics, with strong indicators in terms of CLIPIQA and MUSIQ. In terms of LPIPS, the model also shows an advantage over the compared methods. In terms of PSNR and SSIM, the model shows its limitations, however explainable, given the nature of the algorithm and the followed framework. Strengths: 1) The paper is generally well written and easy to follow. The provided details are enough to offer a good overview of the proposed method. 2) The authors provide ablations showcasing the influence of multiple parameters characterizing the proposed method. 3) The quality of the results selected for visual comparison matches the performance level in terms of quality-assessment-related metrics. 4) An advantage over the LDM in terms of inference time/diffusion steps needed to achieve a similar image quality assessment performance can be easily derived from Table 2 of the main manuscript. 
The quality of the results provided for visual comparison and the prospect of more efficient diffusion models explain my rating. Weaknesses: 1) The efficiency claim made in the title is not clearly supported by the provided evaluations. Even if the advantage over LDM in terms of inference time and diffusion steps needed is clear, there is no comparison with the other methods considered for comparison (Table 3 and 4). This work should resolve the open question over the opportunity of using a diffusion model for Efficient Image Super Resolution, show the model's strengths, and let the readers evaluate the trade-off between reconstruction fidelity and subjective "image quality". This is why I added this as the first weak point of the authors' claim. 2) The choice in presenting the results of their ablative study (Table 1) versus the against-SOTA comparisons (Table 3, 4) is somewhat strange. The configurations in terms of data are different, making it impossible to quantify the advantage between their proposed models and different configurations of LDM. Also, a fully supervised Image Super Resolution CNN architecture could also be added, to better support the efficiency claim as performance gain per inference-time millisecond. 3) A certain trend can be observed in the first 5 rows of Table 1, where the performance of the model in terms of reconstruction fidelity decreases with the number of diffusion steps used. This can indicate an overfitting behaviour on some properties of the training set that are not aligned with the properties of the testing split. 3.1) Have the authors focused their efforts on strategies to supervise the training of their model in order to retain as much of the reconstruction fidelity as possible, while achieving significant subjective performance? 4) When setting the number of diffusion steps to 20, there seems to be a strong correlation between the parameter p value and the reconstruction fidelity of the model. 
However, the value for p chosen for the final configuration was 0.8. Was the choice made given the impact on performance in terms of CLIPIQA/MUSIQ? Why? 5) A comparison with another referenced method claiming improved inference time for a diffusion-based algorithm (maybe also Refusion) would be useful for the potential reader, to understand the advantages of the proposed method (reference 48 of the main manuscript). Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1) Where does the difference in the number of parameters of the diffusion backbone against the LDM come from? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > **Q1. Efficiency comparison with the other methods in Table 3 and 4.** We have offered a more comprehensive comparative analysis of efficiency as suggested. Please see Q2 of the global response. > **Q2. The choice in presenting the results of their ablative study (Table 1) versus the against-SOTA comparisons (Table 3, 4) is somewhat strange. The configurations in terms of data are different, making it impossible to quantify the advantage between their proposed models and different configurations of LDM.** We would like to elucidate that the ablative study in Table 1 and the comparisons with current SotA methods in Table 3 are both conducted on the synthetic testing dataset "ImageNet-Test" (see Sec. 4.1 of our manuscript). It is imperative to note that the dataset configuration remains consistent between Table 1 and Table 3. To better evaluate the effectiveness of our method in authentic real-world scenarios, we further made a comparison with existing methods on three real-world datasets and presented the results in Table 4. Due to the unavailability of ground-truth images in real-world datasets, our scrutiny primarily focused on no-reference metrics, namely CLIPIQA and MUSIQ, as highlighted in Table 4. > **Q3. A certain trend can be observed in the first 5 rows of Table 1, where the performance of the model in terms of reconstruction fidelity decreases with the number of diffusion steps used. This can indicate an overfitting behaviour on some properties of the training set that are not aligned with the properties of the testing split. 3.1) Have the authors focused their efforts on strategies to supervise the training of their model in order to retain as much of the reconstruction fidelity as possible, while achieving significant subjective performance?** We kindly argue that the decline in reconstruction fidelity with an increase in diffusion steps should not be attributed to overfitting on the training data. 
Rather, this is a well-known phenomenon called the "Perception-distortion Trade-off" [1] in the field of image restoration. In particular, augmenting the generative capability of a restoration model, such as increasing the sampling steps for a diffusion-based method or amplifying the weight of the adversarial loss for a GAN-based method, will result in a deterioration in fidelity preservation, while concurrently enhancing the authenticity of the restored images. That's mainly because a restoration model with powerful generation capability tends to hallucinate more high-frequency image structures, thereby deviating from the underlying ground truth. To facilitate a more comprehensive comparison between our method and LDM, we have plotted the perception-distortion curves of both in Fig. 3(b) of the accompanying rebuttal file. Herein, perception and distortion are measured by the metrics of CLIPIQA and mean squared error (MSE), respectively. The plot reflects the perceptual quality and the reconstruction fidelity of the proposed method and LDM across varying numbers of diffusion steps, i.e., 10, 20, 30, 40, 50, and 100. Significantly, the perception-distortion curve of our ResShift consistently resides beneath that of the LDM, thereby indicating its superior capacity to manage the perception-distortion equilibrium. [1] The Perception-Distortion Tradeoff, CVPR 2018. > **Q4. When setting the number of diffusion steps to 20, there seems to be a strong correlation between the parameter $p$ value and the reconstruction fidelity of the model. However, the value for p chosen for the final configuration was 0.3. Was the choice made given the impact on performance in terms of CLIPIQA/MUSIQ? Why?** Yes, you're right. In our final model, the hyper-parameter $p$ is set as 0.3, hoping to enhance the perceptual quality measured by LPIPS, CLIPIQA, and MUSIQ. 
In real applications of super-resolution, the prevailing preference consistently gravitates towards outcomes characterized by heightened perceptual quality, in contrast to the conventional emphasis on reconstruction fidelity. This principled inclination is grounded in the common observation that a restored image with superior reconstruction fidelity, as measured by the metric of PSNR, often tends to be blurry. In practice, one can adjust the hyper-parameter $p$ freely according to one's requirements. > **Q5. Performance comparison with the other referenced methods.** Please see Q1 of the global response. > **Q6. Where does the difference in the number of parameters of the diffusion backbone against the LDM come from?** Regarding the diffusion backbone, we adopted the codebase of the guided diffusion model (https://github.com/openai/guided-diffusion) and followed its default configurations. The difference in the number of parameters arises from the different ways of conditioning on the time embedding within the ResNet blocks. In the guided diffusion model, a spatial feature transform layer is employed to modulate the ResNet features based on the time embedding. In contrast, in LDM the time embedding vector is directly added to the ResNet features. It should be noted that we have not specifically optimized the backbone to pursue further performance gains, just following the settings in existing work. The superiority of our method primarily comes from the elaborate design of the diffusion model, which serves as the core contribution of this work. --- Rebuttal Comment 1.1: Title: Updated Rating Comment: Since some of the concerns were addressed by the authors in their rebuttal, there is a clear advantage of the method against the compared LDM in terms of performance, while still lagging behind other GAN-based approaches. Authors are encouraged to clearly describe their residual shift procedure, to emphasize their contribution, and also the novelty of their method. 
The evaluations provided in the rebuttal would clearly help the reader understand the advantages of the method and its potential. Thus, I am considering a "Weak accept" rating for the submission, after the rebuttal. --- Reply to Comment 1.1.1: Title: Response to the Reviewer's feedback Comment: Thanks for your feedback. As suggested, we will clearly describe the residual shift procedure, the contribution and novelty of this work, and add some of the evaluations from the rebuttal to our revised version. According to your response, you intend to increase the rating to "weak accept". We would thus like to kindly ask whether you forgot to change the rating in the system. Thanks again!
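On the hyper-parameter $p$ discussed in Q4 above, the following is a generic, hypothetical sketch of how a single exponent $p$ can warp a monotone noise schedule $\eta_t$; it is an illustrative assumption, not the paper's exact formula.

```python
import numpy as np

def eta_schedule(T, p, eta_min=1e-4, eta_max=0.999):
    """Monotone schedule from eta_min to eta_max, warped by exponent p.

    Generic illustration only (assumed form, not ResShift's actual
    schedule): p < 1 front-loads the shift, i.e. eta rises quickly at
    early steps, while p > 1 back-loads it. The paper tunes a schedule
    parameter p in this spirit to trade reconstruction fidelity
    against perceptual quality.
    """
    t = np.arange(T, dtype=float)
    frac = (t / (T - 1)) ** p  # p warps the normalized time axis
    return eta_min + (eta_max - eta_min) * frac
```

For instance, with T = 20, `eta_schedule(20, 0.3)` sits above `eta_schedule(20, 1.0)` at every interior step, making the single-parameter control over the schedule's shape concrete.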
Rebuttal 1: Rebuttal: Dear AC and reviewers, We sincerely thank all reviewers for their constructive comments. Reviewers vFLR, RmTK, and k6dY all raise concerns about the performance comparison with other related methods, and Reviewers vFLR and RmTK both ask us to supplement the efficiency comparison with GAN-based methods. We thus address these two issues here. > **Q1. Performance comparison with other related methods.** Since our proposed method primarily focuses on real-world image super-resolution, we exclusively compare with recent approaches that share a similar orientation. Regarding IRSDE [48] and its enhanced version Refusion (CVPR Workshop 2023), the publicly accessible models are specifically trained for the bicubic super-resolution or stereo super-resolution tasks, and cannot handle general real-world image super-resolution. Similarly, I2SB [49] is limited to bicubic super-resolution. DDRM (NeurIPS 2022) is a non-blind image restoration method, thus lacking the capability to address real-world image super-resolution with intricate and unknown degradation. For InDI [47], the source code and the pre-trained model are not publicly available. Therefore, our paper omits the comparison with these methods. Furthermore, we train a model tailored for x4 bicubic super-resolution, thereby facilitating a fair comparison against IRSDE, I2SB, and DDRM. For I2SB and DDRM, we expedite the inference process to 20 steps using the default sampler. For IRSDE, we keep the number of sampling steps the same as its training setting, i.e., 100 steps, because accelerating this process during inference would yield a severe performance drop. To conduct a comparative analysis, 3,000 images were randomly selected from the validation dataset of ImageNet to serve as our testing dataset. The comprehensive quantitative comparison results are listed in Table 1 of the accompanying rebuttal document. 
We can easily observe that the proposed method outperforms the aforementioned approaches across the various evaluation metrics, the number of parameters, and the inference speed. This indicates the efficacy of our meticulously designed diffusion model. > **Q2. Efficiency comparisons to other competing methods.** We provide a comprehensive comparison of the performance and efficiency of our proposed method against LDM and the GAN-based methods in Table 2 of the rebuttal document. For a more holistic assessment, the efficiency evaluations of our method against recent diffusion-based techniques are also presented in Table 1 of the rebuttal document. Combining these comparative results, we obtain the following conclusions: i) In contrast to existing diffusion-based methodologies, our proposed method has achieved significant improvement in both performance and efficiency. ii) While current diffusion-based methods have shown notable superiority due to their powerful generation capability, they still lag behind GAN-based models in terms of efficiency. Although notably faster than the diffusion-based techniques, as shown in Table 1, our proposed method still remains about twice as slow as SwinIR, the present SotA GAN-based model. We will explore how to further speed up our method in future work. Pdf: /pdf/7cb8808b523f32d7ceab1c9dcbd684f76d2444b4.pdf
NeurIPS_2023_submissions_huggingface
2,023
Summary: This paper introduces a diffusion-based image super-resolution method. The proposed method starts generating the HR image directly from the given LR image (it learns to generate the residual image between LR and HR) rather than starting from pure noise. Therefore, the method can generate the output image faster than previous works, specifically with only 20 sampling steps. In addition, there are hyperparameters to control the fidelity-perceptual quality trade-off of output images, namely T, p, and k. Strengths: - The idea of starting from the LR image and utilizing a residual image is well suited to the SR task, and this is different from conditioning on the LR image. - This successfully lets the method take only a few steps at inference, using a comparable number of parameters and runtime relative to the previous method LDM. - In addition, it is practical for users to control the fidelity-perceptual quality trade-off of the SR model by adjusting the hyperparameters. - The paper shows superiority in terms of perceptual quality (LPIPS, CLIPIQA, MUSIQ) on synthetically degraded images as well as real images. Weaknesses: - There is no visual comparison on synthetic SR test sets, so it is hard to fully assess the effectiveness of the method, yet the paper argues superiority in this aspect. - Several previous methods are listed in Sec 3 (Related Work); however, only LDM is compared in experiments. Are there any difficulties or problems in comparing with other methods? Technical Quality: 3 good Clarity: 3 good Questions for Authors: - In L87, the image value range is [0,1], but I think the residual image is within [-1,1]. Does the following explanation still hold if I am correct? - In L135, how is this achieved? I think it is strange because the method does not learn to generate a noise distribution; what does this description mean? - It would be better if runtimes/parameters were added for other methods in Table 3 (especially vs GAN-based methods). 
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: - Please see the weaknesses and limitations. - My rating is a reflection of the weaknesses and I look forward to the authors' feedback on the weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > **Q1. Visual comparison on the synthetic dataset.** As suggested, we have presented one visual comparison on the synthetic dataset in Fig. 2 of the associated rebuttal file. Evidently, the proposed method outperforms other competing approaches in terms of both fidelity and realism. In our revised version, we will add more qualitative comparison examples to address this concern. > **Q2. Performance comparison with other related methods.** Please see Q1 of the global response. > **Q3. I think the residual image is within [-1,1]. Does the following explanation still hold if I am correct?** Yes. The residual image $e_0$ is within [-1,1], and the following explanation still holds. The shifting sequence $\lbrace\eta_t\rbrace_{t=1}^{T}$ increases monotonically, indicating $\alpha_t = \eta_t -\eta_{t-1} > 0$ and $\alpha_t<1$; we thus have $$\text{max}[\alpha_t e_0] < \alpha_t < \sqrt{\alpha_t},$$ where $\text{max}[\cdot]$ denotes the maximum across all pixels, $e_0 \in R^{h\times w \times c}$, and $h$, $w$, and $c$ represent the image height, width, and channels, respectively. > **Q4. Explanation of Line 135.** In our devised model, the transition distribution defined in Eq. (1) gradually shifts the residual, accompanied by a minor perturbation with Gaussian noise, during the forward process. The intensity of the Gaussian noise is controlled by the hyper-parameter $\kappa$. For a sufficiently large value of $\kappa$, e.g., 40, the noise perturbation becomes predominant within the diffusion process, leading to convergence towards a Gaussian diffusion model. For a discrete Gaussian diffusion model, the progression of the propagation can be equivalently reflected by the signal-to-noise ratios (SNRs) of the hidden states at each timestep. By setting $\kappa=40$, $p=0.8$, and $T=1000$, our proposed method demonstrates a closely similar SNR curve to LDM, as shown in Fig. 2(g) in our paper. Furthermore, Fig. 
2 (e) and (f) visualize the intermediate states of the forward process as observed in our method and LDM. Combining the SNR profile and the visual comparison, it becomes apparent that our method degenerates into a diffusion process that is very close to LDM under such a setting. > **Q5. Efficiency comparison to the GAN-based methods.** Please see Q2 of the global response. --- Rebuttal Comment 1.1: Comment: Thanks for the detailed rebuttal. I checked the visual results and my questions are resolved, so I would like to update my rating from BR to WA. Please include the missing results (regarding Q1, Q2, Q5) and other reviewers' requests as well in the supplementary.
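The bound discussed in Q3 above can be checked numerically. The sketch below assumes a hypothetical linear monotone shifting sequence $\eta_t$ (the paper's actual schedule may differ) and verifies $\max[\alpha_t e_0] \le \alpha_t < \sqrt{\alpha_t}$ for a random residual image in $[-1,1]$:

```python
import numpy as np

# Hypothetical monotone shifting sequence eta_t in (0, 1); the actual
# schedule in the paper may differ -- this only illustrates the bound.
T = 1000
eta = np.linspace(1e-4, 0.999, T + 1)
alpha = np.diff(eta)  # alpha_t = eta_t - eta_{t-1}, each in (0, 1)

rng = np.random.default_rng(0)
e0 = rng.uniform(-1.0, 1.0, size=(8, 8, 3))  # residual image in [-1, 1]

# Since |e0| <= 1 and 0 < alpha_t < 1:
#   max[alpha_t * e0] <= alpha_t < sqrt(alpha_t)
for a in alpha:
    assert np.max(a * e0) <= a < np.sqrt(a)
print("bound holds for all", alpha.size, "timesteps")
```

The key fact is simply that $x < \sqrt{x}$ for any $x \in (0,1)$, so the chain of inequalities holds for every timestep of any monotone schedule with increments below 1.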
null
null
null
null
null
null
Sparse Parameterization for Epitomic Dataset Distillation
Accept (poster)
Summary: This paper introduces the insight of sparse coding into dataset distillation and proposes a sound method. This work can efficiently generate synthetic data by adopting multi-head SCMs as the shared source of the synthetic images and using a recurrent model to generate the synthetic patches. It can cooperate with various previous matching methods. In extensive experiments, the proposed method shows promising results and superiority. Strengths: + Importing sparse coding into the synthetic data is interesting and, judging from the experiments, effective. The analysis of the problem of overlooking the sparsity of the synthetic data itself is insightful and naturally leads to the proposed method. + Good writing, easy to follow. + Extensive experiments, ablations, and visualizations. + Insightful and valuable discussions and analyses, especially the synthetic data properties, clear formulation, storage analysis, and trade-off between quality and quantity. + Fig III in the appendix clearly showcases the effects of the proposed method. Weaknesses: - More discussion is needed about the relations between dataset distillation, sparse coding, and coreset selection. - Though the recurrent model is sound according to the cost discussion, I still wonder about the performance of adopting a heavier model following the same idea of the proposed method. - Fig. 4 should be more self-contained; please explain the comparison and the differences/similarities. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. If the image size is larger, what will happen? 2. What are some other possible choices for the SCM format and its utilization? For future study. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Discussed. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate reviewer 27jp for the insightful and constructive comments and are glad that the reviewer finds our method novel and interesting. In response to the concerns raised, we address them as follows: 1. **More discussions about the relations between dataset distillation, sparse coding, and coreset selection.** * Thanks for the suggestion. As mentioned in _Lines 303 to 305_ of our paper, previous research in sparse coding has primarily focused on compressing individual images [71, 72], with the primary objective of achieving high compression ratios [34] while minimizing perceptual distortion. In contrast, dataset distillation condenses information from the original dataset into a smaller synthetic dataset to enhance downstream training, without explicitly considering perceptual distortion. However, it is worth noting that many techniques and theories in sparse coding can provide valuable inspiration for developing parameterization methods in dataset distillation. * Coreset selection [a, b, c, d] aims to identify a representative subset of the original dataset that aligns with the objectives of dataset distillation. As shown in [e], coreset selection typically performs better when the storage budget is relatively large, while dataset distillation demonstrates superior performance under extremely limited storage budgets. * We will gladly incorporate more discussion of sparse coding and coreset selection in the paper. Additionally, we have included the results of the coreset methods in the experimental comparison, as presented in _Table A in the submitted PDF of the global response_. 2. **Though the recurrent model is sound according to the cost discussion, I still wonder about the performance of adopting a heavier model following the same idea of the proposed method.** * Thanks for your interest in our model design. 
In _Table II in the supplementary material_, we compare the recurrent model (the last row of the table) and the heavier non-recurrent model (the penultimate row of the table). Further discussion of this comparison can be found in _Lines 125 to 127 of the supplementary material_. In short, a heavier model leads to a notable escalation in the number of parameters (from 305k to 543k) while yielding only a marginal improvement in accuracy (from 40.0% to 40.3%). This experiment provides evidence supporting the efficiency of our recurrent design. 3. **Fig. 4 should be more self-contained; please explain the comparison and the differences/similarities.** * Thanks for the suggestion. As mentioned in _Lines 248 to 253_, the purpose of Figure 4 is to demonstrate that the process of sparsification does not lead to a significant loss of essential information. Consequently, the two images (before and after sparsification) appear highly similar. To enhance clarity, we will update the caption of Figure 4 to include this conclusion. 4. **If the image size is larger, what will happen?** * Thanks for the question. As stated in _Line 68_, our method is designed to efficiently handle high-resolution datasets. However, it is worth noting that previous methods did not specifically perform experiments on larger image sizes, with a maximum size of 128x128. Therefore, conducting a direct comparison with these methods on image sizes of 256x256 is not feasible. * Following your suggestion, we performed experimental investigations on both the distribution matching baseline [17] and our method, using the ImageNette dataset with image sizes of 128x128 and 256x256. 
In line with the practice of previous methods that handle higher-resolution images with deeper networks, we increased the depth of the ConvNet to 6 for the 256x256 image size:

| | 128 x 128 | 256 x 256 | Gain |
|-|-|-|-|
| Baseline | 28.6 $\pm$ 0.6% | 29.5 $\pm$ 1.1% | +0.9% |
| Ours | 53.5 $\pm$ 1.2% | 57.7 $\pm$ 0.9% | +4.2% |
| Gain | +24.9% | +28.2% | +3.3% |

The results demonstrate that our method achieves more substantial improvements when applied to higher-resolution images. These findings will be thoroughly incorporated into the paper, as they contribute to the expanding body of evidence supporting the effectiveness of our method on high-resolution image datasets. 5. **Some other possible choices of the SCM format and utilization? For future study.** * Thanks for the suggestion. Several other efficient sparse matrix storage formats, such as compressed sparse row (CSR), compressed sparse column (CSC), diagonal (DIA), and block compressed sparse row (BSR), have the potential to save more storage compared to the naive coordinate (COO) format. However, DIA and BSR only achieve significant storage savings when the sparse matrix exhibits certain structural patterns. In the context of our work, the sparsity induced by the $l_1$ penalty is unstructured. As a result, we have opted for the simplest COO format to accommodate this unstructured sparsity. For the CSR and CSC formats, we can convert directly from COO, which can bring further storage savings when the sparsity is not extremely high. We will consider other sparsity penalties and their corresponding efficient storage formats in our future work. ---- [a] Welling, Max. "Herding dynamical weights to learn." Proceedings of the 26th Annual International Conference on Machine Learning. 2009. [b] Chen, Yutian, Max Welling, and Alex Smola. "Super-samples from kernel herding." arXiv preprint arXiv:1203.3472 (2012). [c] Feldman, Dan, Matthew Faulkner, and Andreas Krause. "Scalable training of mixture models via coresets." 
Advances in Neural Information Processing Systems 24 (2011). [d] Rebuffi, Sylvestre-Alvise, et al. "iCaRL: Incremental classifier and representation learning." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017. [e] Cui, Justin, et al. "DC-BENCH: Dataset condensation benchmark." Advances in Neural Information Processing Systems, 2022. --- Rebuttal Comment 1.1: Title: Post-rebuttal Comment: I thank the authors for the response. My main concerns are addressed well in the above rebuttal. However, the other reviewers have shared some important concerns that should be discussed, and I am also looking forward to their opinions. Overall, I think this paper proposes an interesting combination of two directions and offers good inspiration.
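The COO-versus-CSR storage trade-off discussed in the rebuttal above can be illustrated with a small `scipy.sparse` sketch; the matrix size and sparsity level here are made up for illustration and are not the paper's actual SCM dimensions:

```python
import numpy as np
from scipy import sparse

# Toy sparse coding matrix with unstructured sparsity, as induced by an
# l1 penalty; shape and density are illustrative only.
rng = np.random.default_rng(0)
dense = rng.uniform(-1, 1, size=(64, 64))
dense[rng.random(dense.shape) > 0.05] = 0.0  # keep roughly 5% of entries

coo = sparse.coo_matrix(dense)
csr = coo.tocsr()

# COO stores (row, col, value) per nonzero; CSR replaces the per-entry
# row indices with a single row-pointer array of length n_rows + 1.
coo_storage = coo.row.size + coo.col.size + coo.data.size
csr_storage = csr.indptr.size + csr.indices.size + csr.data.size
print("COO entries stored:", coo_storage)
print("CSR entries stored:", csr_storage)
```

As the rebuttal notes, the CSR conversion saves storage whenever the number of nonzeros exceeds the number of rows, while for extremely high sparsity the plain COO triplet format can already be the cheaper option.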
Summary: The paper proposes a new framework (SPEED) to perform dataset distillation. The new framework is composed of 3 parts: 1. Spatial-Agnostic Epitomic Tokens (SAETs) 2. Sparse Coding Matrices (SCMs) 3. A Feature-Recurrent Network (FReeNet) The paper also employs multi-head attention to ensure the diversity of distilled images, thus achieving a new SOTA. The new framework SPEED is also claimed to work with multiple dataset matching methods and enhance their performance. Through various experiments, the method shows strong performance on CIFAR-10/100 and TinyImageNet. Similar results are also observed on ImageNet subsets. ### I have read the authors' response and my concerns are addressed by seeing more experiment results. Strengths: ## originality SPEED is claimed to be the first paper studying spatial redundancy in the field of dataset distillation/condensation. SPEED applies several concepts from ViT, dictionary learning, and sparse coding. ## quality The method works very well on datasets with higher resolution. ## clarity The paper is well written and easy to follow. The framework of the method and the learned images are visualized, which makes it easy to understand. ## significance The proposed method achieves SOTA performance under most settings. The paper novelly proposes to distill images at the image-patch level, which shows a new way for dataset distillation. The proposed method is compatible with multiple matching objectives. Weaknesses: - Incomplete evaluation results in Table 2. IPC 1/10/50 are used for CIFAR-10/100, but only IPC 1 is used for TinyImageNet. Why is that the case? From [1], even the trajectory matching method this paper adopts has reported results on TinyImageNet with IPC 1/10/50. - In Table 1, for the parameterization methods including SPEED, can the authors include how many synthetic images are generated for evaluations, e.g. 
11 for IPC 1, so that we know if the performance gain is due to an increased number of images or increased quality of generated images. [1] Dataset Distillation by Matching Training Trajectories Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: See my comments in the weaknesses section. In general, this paper provides some insights into a totally different way of parameterization in dataset distillation. I am willing to raise the score if the above questions are answered. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: The paper proposes a new method to apply parameterization in dataset distillation. However, the generalization ability of the method, such as for the recurrent neural network, is unknown. Moreover, some evaluation results in Table 2 are missing, which makes it even harder to tell. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank reviewer 7J1t for the pertinent and valuable feedback. We are delighted to learn that the reviewer finds our method achieves good performance across multiple datasets, as demonstrated in Table 1 and Table 2. The concerns are fully addressed as follows. 1. **Incomplete evaluation results in Table 2? Only IPC 1 for TinyImageNet is used. Why is that the case?** * Thanks for the question. It is important to highlight that our results under the IPC 1 storage budget (ACC: 26.9%) have already surpassed the performance of the baseline method (trajectory matching [16]) under the IPC 10 storage budget (ACC: 23.2%) on TinyImageNet. Moreover, our results even approach the performance of the baseline method under the IPC 50 setting (ACC: 28.0%). * As highlighted by reviewer yNYq, it is of utmost importance to conduct comprehensive comparisons among different parameterization methods. Previous parameterization studies either lacked results on TinyImageNet [19, 21] or only provided results under the IPC 1 budget [22]. Consequently, experiments conducted under higher IPCs lack meaningful comparisons. Due to the limitations of time and computational resources for submissions, we did not prioritize these experiments accordingly. * As per your suggestion, we report our experimental results on TinyImageNet under IPC 10/50 as an addition, as shown in the following table:

| IPC (#Param) | 1 (12,288) | 10 (122,880) | 50 (614,400) |
| -------- | ----- | ----- | ----- |
| TM | 8.8 $\pm$ 0.3% | 23.2 $\pm$ 0.2% | 28.0 $\pm$ 0.3% |
| IDC | - | - | - |
| HaBa | - | - | - |
| RTP | 16.0 $\pm$ 0.7% | - | - |
| Ours | **26.9 $\pm$ 0.3%** | **28.8 $\pm$ 0.2%** | **30.1 $\pm$ 0.3%** |

Our method achieves an accuracy of 28.8% under the IPC 10 budget, showcasing substantial improvements over the baseline (ACC: 23.2%), and even surpassing the baseline under IPC 50 (ACC: 28.0%). 
Additionally, our method achieves an accuracy of 30.1% under the IPC 50 budget, establishing state-of-the-art performance. These experimental results have been incorporated into _Table A in the PDF of the global response_, thus making our evaluation more comprehensive. 2. **Can the authors include how many synthetic images are generated for evaluations? Is the performance gain due to an increased number of images or increased quality of generated images?** * Thanks for your insightful attention to the quality and quantity aspects of dataset parameterization. We also recognize the significance of this topic, and therefore we have dedicated a section in our paper to thoroughly discussing it. The experimental results can be found in _Tables 6 to 8_, and the discussion can be found in _Lines 254 to 270_. In conclusion, both quality and quantity play crucial roles in parameterization. Excessively sacrificing one in favor of the other without careful consideration can lead to a noticeable decline in performance. * As suggested, the number of synthetic images on ImageNet Subsets in Table 1 is summarized:

| IPC (#Param) | 1 (49,152) | 10 (491,520) |
| ---- | ---- | ---- |
| #Synthetic Images | 15 | 111 |

Our method synthesizes 15 images under the IPC 1 budget, and remarkably, it achieves performance that is competitive with other methods operating under the IPC 10 budget, while utilizing only 1/10 of the parameters. For instance, on ImageNette, our method achieves an impressive accuracy of 66.9% under the IPC 1 budget, surpassing the previous state-of-the-art result of 66.5% achieved under the IPC 10 budget. These findings demonstrate the efficiency of our method and the high quality of the synthetic images it produces. 
* To further prove the above claim, we summarize the number of synthetic images on CIFAR-100, compared with other parameterization works:

| IPC (#Param) | 1 (3,072) | 10 (30,720) | 50 (153,600) |
| ---- | -------------- | --------------- | --------------- |
| IDC | - | 40 (44.8 $\pm$ 0.2%) | - |
| HaBa | 5 (33.4 $\pm$ 0.4%) | 45 (42.5 $\pm$ 0.2%) | 245 (47.0 $\pm$ 0.2%) |
| RTP | 16 (34.0 $\pm$ 0.4%) | 232 (42.9 $\pm$ 0.7%) | - |
| Ours | 11 (**40.0 $\pm$ 0.4%**) | 62 (**45.9 $\pm$ 0.3%**) | 100 (**49.1 $\pm$ 0.2%**) |

As evident, while the number of synthetic images generated by our method is **not the highest** among all approaches, our outstanding performance clearly showcases the **high quality** of the synthetic images. This further emphasizes that our approach enhances performance by improving both the quality and quantity of the synthetic images. Moreover, it demonstrates the highly efficient reduction of spatial redundancy achieved by our method. 3. **The generalization ability of the method is unknown, such as for the recurrent neural network.** * We appreciate the reviewer's attention to the generalization abilities of our method. We have discussed and demonstrated the generalization abilities of our method in _Section 3.2 (Lines 203 to 232)_, including cross-architecture performance, universality to matching objectives, and robustness to corruption. For a comprehensive understanding of the experimental results and an in-depth analysis, we kindly refer the reviewer to _Section 3.2_ of the paper. * For the ablation study of our recurrent blocks, please kindly refer to _Table II in the supplementary material_ and the second question of reviewer 27jp. --- Rebuttal Comment 1.1: Title: Thanks for the response Comment: Thanks for the authors' response. The results look good to me. Please include these in the paper. I recommend acceptance of this paper after reviewing the results.
Summary: This work proposes a new memory-saving method of dataset distillation by distilling the dataset into a set of Spatial-Agnostic Epitomic Tokens, which are indexed by Sparse Coding Matrices and decoded into images by a Feature-Recurrent Network. This method is plug-and-play compatible with existing distillation methods, allowing them to achieve more efficient storage. State-of-the-art results are shown for many datasets and problem settings. Strengths: The overall presentation of the paper is very nice. The figures and equations very clearly explain the main ideas to the reader. The colorfully annotated Eq 7 especially makes the storage budget easy for the reader to digest. The SPEED method itself is quite interesting, and Algorithm 1 very clearly explains the process. The many ablation studies and side experiments further demonstrate the effectiveness of the method. Weaknesses: The biggest issues I have with this paper are the presentation of Tables 1 and 2. This is not an issue specific to this paper, but methods that do re-parameterization as a means of memory saving should not be directly compared to methods that only propose a matching algorithm. It should be made extremely clear that IDC, HaBa, RTP, and SPEED are solving an inherently different problem than the baseline methods. Synthetic set size should also not be given in "IPC" but in the number of learnable parameters, since the given IPC simply isn't true anymore. For example, instead of IPC=10, you could have #Params <= 30,720 (10x3x32x32). Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: I am happy to raise my score if the above weakness is addressed. I am also just curious about another thing: the high-resolution images seem extremely blocky due to the nature of the patches. Have you considered adding 1 or 2 convolutional layers _after_ the patches are stitched back together? Confidence: 5: You are absolutely certain about your assessment. 
You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate reviewer yNYq for the insightful suggestions and are happy that the reviewer finds our work interesting and effective. We are glad to address the concerns and take the suggestions as follows: 1. **The presentation of Tables 1 and 2. Re-parameterization should not be directly compared to methods that only propose a matching algorithm.** * Thank you for the suggestion. We acknowledge that a direct comparison between pure matching methods and parameterization methods can be a matter of controversy, and we agree on the importance of presenting the tables in a manner that eliminates any ambiguity. As per your request, we have reorganized Table 2 into _Table A in the submitted PDF of the global response_. In Table A, we have made explicit differentiations between pure matching methods and parameterization methods, ensuring a clear separation between these two categories. Furthermore, we have included an additional row to indicate the corresponding parameter amounts for the parameterization methods. We will also make the same revisions to Table 1. 2. **The high-resolution images seem extremely blocky. Have you considered adding 1 or 2 convolutional layers after the patches are stitched back together?** * We sincerely appreciate the valuable advice provided. As suggested, we performed multiple experiments on ImageNette under the IPC 1 storage budget, adding 1 and 2 convolutional layers with kernel sizes 3 and 5:

| Kernel Size | 3x3 | 5x5 |
| ------ | --- | --- |
| 1 layer | 66.3 $\pm$ 1.8% | 66.4 $\pm$ 1.3% |
| 2 layers | 65.9 $\pm$ 1.3% | 64.0 $\pm$ 0.5% |
| None | **66.9 $\pm$ 0.7%** | **66.9 $\pm$ 0.7%** |

As evident from the results, the incorporation of additional convolutional layers in our experiments did not yield a significant improvement in downstream training. However, it did provide slight relief from the chessboard (blocky) artifact, as depicted in _Figure A in the submitted PDF of the global response_. 
Nonetheless, the impact of the chessboard artifact on downstream training and the exploration of parameter-efficient methods to eliminate these artifacts warrant further investigation. --- Rebuttal Comment 1.1: Title: Response to Rebuttal Comment: Thank you very much for addressing my concerns. As promised, I will raise my score to a 7.
Summary: This paper proposes a new parameterization for dataset distillation. The new parameterization considers image patches, and uses sparse matrices and a recurrent feature network to generate synthetic images. The total number of parameters satisfies the storage constraint. The experimental results show improvement over previous methods. Strengths: + This paper proposes a new parameterization for synthetic images in dataset distillation. + The proposed method works well across various benchmarks, including ImageNet, CIFAR, and cross-architecture generalization. + This paper is quite interesting in advancing research on dataset distillation parameterization. Spatial redundancies are quite dominant in standard parameterization, and this method can certainly help with alleviating that. Weaknesses: - The paper claims to reduce spatial redundancies. I wonder how the authors compare their method with [19]. Does that also consider reducing spatial redundancies? - The paper provides better empirical performance, but the research messages are not quite surprising. - Have the authors considered using algorithms similar to [a] for sparsification? [a] Parameter-Efficient Transfer Learning with Diff Pruning Technical Quality: 3 good Clarity: 2 fair Questions for Authors: See the weaknesses section. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank reviewer kT2g for the valuable comments and feedback. We deeply appreciate the reviewer's acknowledgment of the merits of our proposed method, including its effectiveness in reducing spatial redundancy, its capacity for generalization, and its superior performance compared to previous approaches. We are fully committed to addressing the concerns raised by the reviewer, and we outline our responses below. 1. **How do the authors compare their method with [19]? Does that also consider reducing spatial redundancies?** As we discussed in _Line 283_, IDC [19] adopts a simple downsampling strategy to save more images, which also partially reduces spatial redundancy. However, naive downsampling suffers from two major drawbacks: * The uniform downsampling operator employed in IDC indiscriminately discards spatial information, including crucial details that are vital for downstream training. In contrast, our method incorporates end-to-end spatial-agnostic learning to preserve informative areas that are crucial for downstream training, while discarding irrelevant and repetitive areas. * Downsampling **cannot** effectively reduce spatial redundancy within the same image across different locations, let alone across different images, as it is **not spatial-agnostic**. However, our method is specifically designed to be spatial-agnostic, allowing us to further mitigate spatial redundancy in such scenarios. 2. **Have the authors considered using algorithms similar to [a] for sparsification?** * We appreciate the introduction to the noteworthy work [a] on transfer learning. This method employs a more sophisticated approach to approximate $l_0$ sparsity by relaxing a binary vector into continuous space. In contrast, our method directly utilizes an $l_1$ penalty as a surrogate for the $l_0$ penalty. Both techniques are widely recognized and extensively utilized in the machine learning community. 
It is important to note that the specific method used to approximate $l_0$ sparsity does not affect our overall parameterization framework. We will discuss various sparsification techniques in our related work, including the one presented in [a], and plan to investigate and evaluate their respective impacts in future studies. We appreciate your suggestion. ---- [a] Guo, Demi, Alexander M. Rush, and Yoon Kim. "Parameter-efficient transfer learning with diff pruning." arXiv preprint arXiv:2012.07463 (2020).
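As a generic illustration of the $l_1$ surrogate discussed above (not the paper's actual training code), the proximal operator of the $l_1$ penalty is the standard soft-thresholding function, which zeroes out small coefficients and yields exactly the kind of unstructured sparsity mentioned in the rebuttal:

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of lam * ||.||_1: shrinks entries toward zero."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

rng = np.random.default_rng(0)
w = rng.normal(size=1000)          # dense coefficients (illustrative)
w_sparse = soft_threshold(w, 0.8)  # l1-induced unstructured sparsity

# Entries with |w| <= lam are set exactly to zero; the rest shrink by lam.
sparsity = float(np.mean(w_sparse == 0.0))
print(f"fraction of zeroed coefficients: {sparsity:.2f}")
```

Swapping in a relaxed $l_0$ scheme such as [a] would change only this sparsification step, leaving the rest of the parameterization framework untouched, which is the point made in the rebuttal.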
Rebuttal 1: Rebuttal: We would like to thank all the reviewers for their time and effort in the review process. Overall, we are pleased that the reviewers recognize the novelty (reviewers 27jp, yNYq, 7J1t), impressive experimental results (reviewers kT2g, 7J1t), and clear presentation (reviewers 27jp, yNYq, 7J1t) of this work. Please refer to the attached PDF for tables and figures. Pdf: /pdf/42284d99cf84aa8e34e49dc388d14b21983ddae4.pdf
NeurIPS_2023_submissions_huggingface
2023
A Graph-Theoretic Framework for Understanding Open-World Semi-Supervised Learning
Accept (spotlight)
Summary: The authors describe a theoretical framework for contrastive learning of representations in an open-world setting, where both labelled data and unlabelled data of potentially new classes are available. They explicitly describe the graph encoding positive sample connections and formulate a contrastive loss that amounts to a spectral decomposition of the adjacency matrix. Using this framework, they prove two theorems about the change in clustering quality of their representations via k-means when incorporating some labels. The main take-away is that labelled data helps to cluster another class if this other class has strong connections to the labelled class via data augmentation and was previously poorly clustered. Their method also outperforms several competitors for open-world and self-supervised representation learning with subsequent clustering. Strengths: I appreciate the theoretical nature of the paper. Understanding the behavior of machine learning models when a clear learning signal via labels is unavailable or just partially available is important for their development and trustworthiness. It is also encouraging that their framework not only allows a formal investigation but also holds up practically. I find the paper well written, with an appropriate deferral of more technical parts to the appendix. The authors also discuss connections of their work to the more common contrastive SimCLR loss. Their theory seems to be novel for the setting of open-world representation learning. Weaknesses: **Strength of different types of augmentations:** In the illustrative example in section 4.1. it is assumed that the probability $\tau_c$ of augmenting across classes if there is a shared attribute (e.g. augmenting a blue cube to a blue ball) is larger than the probability $\tau_s$ of just changing attributes within a class (blue ball to red ball). I find this quite counterintuitive. First, I am surprised that $\tau_c$ should be non-zero. 
This might be achievable in the toy setting, but for any real-world data, say CIFAR10, it is very unlikely, if not impossible, to augment two images of different classes to the same image. Maybe a plausible change of class can be achieved via cropping (cropping an image of a dog in front of flowers to just the flower) but it still seems unlikely that there should be another image in the dataset that augments to the same view. The same argument applies, to a lesser extent, also to the question of whether two images of the same class can be augmented to the same view (positivity of $\tau_s$). In the toy setting, I also wonder how positions and sizes were treated. Can a large blue ball in the top left corner only augment to a large blue cube in the top left corner or to any blue cube? Second, I am surprised by the order of augmentation probabilities. Is it not more likely for two data points of the same class to augment to the same view than for two data points of different classes? A similar requirement seems to be relevant for Theorem 4.2 (l255 ff). How crucial are these assumptions (or corresponding ones) for the outcome of the illustrative example, the main theory, and the empirical validation? How relevant does this make the theoretical analysis in real settings? **Empirical illustration of the main theorems:** It would strengthen the paper if the authors had included experiments that showed how the predicted quantities in Thm 4.1 and 4.2 behave empirically in real / toy settings. In particular, a graph that shows the $\Delta(\delta)$ terms in both theorems for different $\delta$ values both according to their theory and in terms of actual experiments. This would also make it more evident how restrictive the assumptions of the theorems are and how tight the inequality is. The size and evolution of the three summands in the last equation of Thm 4.2 would also be interesting. 
**Minor:** - I think that highlighting which feature is more separable in Fig 3b) is a bit problematic. First, UMAP plots are known to artificially tear apart structures. Second, the shape separability in 3b1) is also very good (but admittedly less obvious when looking at the plot as the color of the marker is much more prominent than its shape in the 2D plot). Therefore, I would rather recommend stressing the fact that when incorporating labels, the islands of the same shape but of different color seem to be much closer together (i.e. much less separable by color) than without labels. - It would have helped my understanding of the paper if it was stated more explicitly that label information is made available by providing the label of existing but previously unlabelled points as opposed to adding new labelled points. - I would suggest also assigning the bottom matrix in Fig 2b) as $\eta_l A^{(l)}$ as in the line above and then add another line $A=...$ below. Technical Quality: 3 good Clarity: 3 good Questions for Authors: **Size of the data augmentation graph:** The node set of the graph described in 3.1 is the set of all augmented views of all available data. Why is this even finite? Consider, for instance, image data. Are there not infinitely many ways to color jitter even a single image? Or is the idea to only consider a fixed and finite set of augmentations of each data point that is selected prior to training? This would be different from what is typically done in contrastive learning, where augmentations are applied on the fly. Even when considering a fixed set of augmentations, the size of the graph would have to be many times the size of the original dataset, right? **Empirical Validation:** - Were the same positive samples (including label information) used when comparing to SimCLR? From the discussion in appendix D, I think they were not. 
If that is the case and the SimCLR experiment is purely self-supervised, why is there such a large difference between the known, novel and all classes? Shouldn't all the completely un-/self-supervised methods be agnostic to which class has labels (which they cannot use anyway)? - Using a ResNet-18 typically leads to about 90% linear probing and kNN accuracy on CIFAR10 for SimCLR. I appreciate that the evaluation is different and k-means is used to infer the clusters here. Nevertheless, I am surprised by the drastic drop in performance. Do the authors have an idea why this happens? Do the same SimCLR representations still achieve about 90% accuracy with linear probing / kNN classifier? - I appreciate that the paper is about the open-world setting, where there might be novel classes without any labels, so that one can only do clustering and not classification for evaluation. Still, the label information is available for CIFAR10 and CIFAR100 and I would have found it interesting to see how the proposed method fares under the SimCLR evaluation protocol with a linear probe / kNN classifier. This setting is standard in the self-supervised learning world and thus makes the quality of the proposed method more interpretable to an audience more familiar with self-supervised rather than open-world representation learning. I assume SORL should outperform SimCLR since it has some label information available at train time, right? - Why is the performance for the "known" classes lower than that of the "novel" classes and why is the performance for "all" classes (typically) worse still than for the "novel" classes for the methods above the horizontal line in Table 1? For the open-world methods below the horizontal line, the performance is (typically) best on the "known" class, worse on the "novel" class and somewhere in between for "all" classes. This is what I would have intuitively expected. 
- The method LSD-C [1] achieves over 80% clustering accuracy on CIFAR-10 without using any labels (and no supervised training of a shallow classifier). It might therefore be a relevant competitor. **Minor:** - The batch size of 512 is rather small for contrastive learning. Why is that? - Are the different islands of points that share both color and shape in Fig 3b meaningful? Do they encode perhaps the size or position? **References:** [1] Rebuffi, Sylvestre-Alvise, et al. "Lsd-c: Linearly separable deep clusters." Proceedings of the IEEE/CVF international conference on computer vision. 2021. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: Limitations were not explicitly discussed. I suggest a more detailed discussion of how their theory and assumptions hold up on real world data, see weaknesses above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
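Since the review compares methods by clustering accuracy, it may help to recall how that metric is typically computed: predicted cluster indices are relabeled by the best one-to-one matching with ground-truth classes before measuring accuracy. Below is a minimal brute-force sketch; the Hungarian algorithm is the usual choice when the number of classes is large, and the toy labels here are purely illustrative.

```python
import numpy as np
from itertools import permutations

def clustering_accuracy(y_true, y_pred):
    """Clustering accuracy: best one-to-one relabeling of predicted clusters.

    Brute force over permutations; fine for a handful of classes.
    """
    k = int(max(y_true.max(), y_pred.max())) + 1
    best = 0
    for perm in permutations(range(k)):
        mapped = np.array([perm[p] for p in y_pred])  # relabel predicted clusters
        best = max(best, int((mapped == y_true).sum()))
    return best / len(y_true)

y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([2, 2, 0, 0, 1, 1])  # same partition, permuted labels
assert clustering_accuracy(y_true, y_pred) == 1.0
```

Because the metric is invariant to label permutation, a method that recovers the right partition scores perfectly even if its cluster indices are arbitrary.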
Rebuttal 1: Rebuttal: We thank the reviewer for the insightful questions! Below we address each of your comments in detail. #### **- Discussion on the augmentation graphs.** We have noted the reviewers' concerns regarding the definition of the augmentation graph from several angles, and we address these concerns as follows: 1. *The feasibility of augmenting two real images to the same view.* We agree that most images cannot be augmented from images with dissimilar content when limiting the augmentation strategies to commonly applied techniques like cropping and color jittering. However, it becomes achievable once we expand the strategy to include the use of generative models. Though it's not necessary to apply generative augmentation in contrastive learning, the research community (Haochen et al., 2021; 2022; Shen et al., 2022) is actively utilizing the concept of the augmentation graph to theoretically explain the practical success of contrastive learning. 2. *The finiteness of the augmentation graph.* Though $N$ can be incredibly large in practice, it remains finite. It is upper-bounded by $256^{3 \times W \times H}$, considering a $W \times H$ RGB image. Despite its finiteness, the graph remains intractable in practice when considering all augmentation views. This complexity necessitates the use of SORL to learn a low-rank representation space, enabling us to approximate the graph rather than directly optimizing $\mathcal{L}_{mf}$. 3. *The order of the augmentation probability.* In the main theory sections (4.1 & 4.2), we do not make any assumptions regarding the order of augmentation probability since it is inherently determined by the augmentation strategies. #### **- Justification on the graph setup for the toy example.** We are happy to provide justification for the reviewer’s concern about the setup in the toy example from several aspects: 1. *Why only consider shape and color?*. 
We fully agree with the reviewer's insight that real 3D-shape images include a broad range of properties, including color, shape, angle, position, size, material, etc. However, our primary objective with the toy example is to elucidate the main intuition. Therefore, we have chosen to focus solely on the two most salient features, color and shape, in constructing our toy augmentation graph. 2. *Why assume a non-zero $\tau_c$?*. Given our simplification of the property set to {color, shape}, we also presume the existence of augmentation strategies capable of permuting colors and shapes. As a result, $\tau_c$ and $\tau_s$ are two non-zero values in our toy example. 3. *Why assume $\tau_c$ > $\tau_s$?* It's essential to note that this assumption only serves to facilitate our illustration and does not impact the main theorem or empirical validation. Our principal goal here is to demonstrate that the addition of labels can alter clustering outcomes. In Figure 3(a), we depict how adding labels to cubes can reverse the incorrect trend where samples are clustered by color. Similarly, one could also construct a toy example where $\tau_s$ > $\tau_c$, demonstrating that adding labels to "red samples" can invert the clustering results by shapes. #### **- Empirical illustration of the main theorems.** Great suggestions! In response, we are pleased to introduce numerical verification to affirm the correctness of Theorems 4.1 and 4.2 with our toy example. **Due to the space limitation, we provide the details in the [additional rebuttal pdf](https://openreview.net/attachment?id=09O88IQijF&name=pdf)**. It is important to note that applying Theorems 4.1 and 4.2 necessitates explicit knowledge of the adjacency matrix. However, obtaining such a matrix becomes intractable in real datasets, particularly when considering all augmentation views of source images. 
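The kind of numerical check discussed above can be mimicked on a minimal stand-in for the toy graph. The sketch below (numpy only; the 4-node graph and all $\tau$ values are illustrative, not the paper's actual configuration) builds a tiny augmentation graph over (blue/red) x (cube/sphere) and confirms that with $\tau_c > \tau_s$ the top non-trivial eigenvector of the row-normalized adjacency matrix splits nodes by color rather than by shape.

```python
import numpy as np

# Hypothetical 4-node toy augmentation graph: nodes are
# (blue cube, red cube, blue sphere, red sphere).
t1, tc, ts = 1.0, 0.3, 0.1  # self / cross-shape / cross-color weights (illustrative)
A = np.array([
    [t1, ts, tc, 0.0],   # blue cube
    [ts, t1, 0.0, tc],   # red cube
    [tc, 0.0, t1, ts],   # blue sphere
    [0.0, tc, ts, t1],   # red sphere
])
A /= A.sum(axis=1, keepdims=True)  # row-normalize; stays symmetric (equal row sums)

evals, evecs = np.linalg.eigh(A)
order = np.argsort(evals)[::-1]    # eigenvalues, largest first
second = evecs[:, order[1]]        # top non-trivial eigenvector

# With tau_c > tau_s the second eigenvector separates nodes by COLOR
# (blue cube & blue sphere vs. red cube & red sphere), not by shape.
color = np.array([1.0, -1.0, 1.0, -1.0])
overlap = abs(second @ color) / (np.linalg.norm(second) * np.linalg.norm(color))
assert overlap > 0.99
```

Flipping to $\tau_s > \tau_c$ makes the shape eigenvector dominant instead, matching the intuition in the rebuttal that adding labels is what rectifies a color-dominated clustering.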
#### **- Justification on results of SimCLR.** We are happy to work with the reviewer to justify the baseline results, which are directly cited from the prior work ORCA (Cao et al., 2021). To reproduce the results, we train SimCLR on CIFAR-10, using the code available at https://github.com/HobbitLong/SupContrast. Our training achieved an All/Novel/Seen Accuracy of 61.1/81.0/72.1, which is notably higher than what was reported in ORCA. Furthermore, we noted that K-means results tend to fluctuate when accuracy is low and the class number is small, as with CIFAR-10. An unfortunate initialization of class centers can significantly downgrade clustering performance. This observation might provide insights into why the ordering of results can sometimes diverge from expectations. #### **- Other comments and questions.** + *Measuring SORL/SimCLR on Linear Probing/KNN classifier protocol.* As suggested, we provide the comparison of SimCLR and SORL below. By having more label information, SORL performs more strongly than SimCLR on CIFAR-10, which follows the intuition. | Method | Linear Probing | KNN classifier| |--|--|--| | SimCLR | 87.6 | 86.2 | | SORL | 92.9 | 91.4 | + *Performance on LSD-C*. For the reviewer's interest, the All/Novel/Seen accuracy on CIFAR-10 is 82.1/88.8/87.6 respectively. + *Why do we use 512 as a batch size?* To ensure a fair comparison in training setup, we follow the protocol in existing works including ORCA (Cao et al., 2021) and OpenCon (Sun et al., 2023). + *What do different islands of points mean in Figure 3(b)?* Great catch! When we generate these 3D shapes, there are other variants like materials (rubber, metal) and sizes (big, small). Therefore, the sub-clusters are very likely to represent other variants. #### **- Writing suggestions.** + *The conclusion of UMAP visualization.* Thanks for the suggestions! We will revise the claim as suggested in Figure 3(b). + *The explicitness of the labeling perturbation setup.* Great comment! 
See changes in the global response. + *Adjustment in the layout of Figure 2.* Fixed, thanks for the suggestion! + *Add the limitation discussion.* Absolutely! See global response. --- Rebuttal Comment 1.1: Comment: Thank you for your reply! **Discussion of the augmentation graph:** Your clarifications are helpful, especially the mention of generative augmentations and the limited relevance of the order of augmentation probabilities for the Theory in Sec 4. **Justification on the graph setup for the toy example:** - *Why only consider shape and color* Of course, a toy example deliberately has fewer properties than a real example. My confusion stemmed from a misunderstanding of the augmentations. I had not realised that all the augmentation probabilities are simply completely independent of the position and size of the shape, e.g., an object has the same probability to be augmented to any other position or size without changing shape and color, namely $\tau_1$. This should actually have been sufficiently clear from Fig 2a). So in a way, there is "perfect augmentation" to unlearn the irrelevance of size and position. - *Why assume a non-zero $\tau_c$?* I now understand that the order of the augmentation probabilities does not matter for correctness of the theory in Section 4. But the connection of unlabelled to labelled data, i.e., $\mathfrak{l}$ does enter Thm 4.1 and Thm 4.2. In that light, I appreciate the first part of the proposed limitation statement in the general comment. - *Why assume $\tau_c > \tau_s$?* Thanks for clarifying that the validity of the theory in Sec 4 does not hinge on the order of these probabilities. Do I understand correctly that choosing $\tau_c > \tau_s$ is a deliberately problematic setting, in which augmentations maintaining the category of interest (shapes) are less likely than augmentations that change it and instead maintain a nuisance property (color)? 
Without labels this augmentation setting would naturally cluster by color rather than by shape, as depicted in Fig 3a). Adding the labels for shape cube rectifies the erroneous focus of the augmentations on the color. If one instead chose $\tau_c < \tau_s$, no fix would be required because the resulting clustering is automatically by shape (although perhaps there is still a marginal benefit from using labels for some shape). I think I had missed the first part of this, i.e., that there is anything to fix because the augmentation strengths do not align with the category of interest. From what I understand, a situation similar to $\tau_c > \tau_s$ is also the one in which Thm 4.2 offers the largest benefit: high connection to the labelled class and high inter-class similarity (from large $\tau_c$) and low intra-class similarity (from low $\tau_s$). Thanks again for the clarification! **Empirical illustration of the main theorems:** Thank you for including these results in the additional pdf! Especially the main term in Thm 4.1. is impressively accurate. Thanks for clarifying in the proposed limitations section that one needs access to the full adjacency matrix to evaluate the terms in Thms 4.1, 4.2. Sorry if this is a trivial question, but I do not get the connection entry in Tab 2. From the top of page 13, I thought it would be $\mathfrak{l}_{sphere} - 1 / N = 0.5\tau_c -0.0001 = 0.1499$. What am I missing? **Justification on results of SimCLR:** Thank you for computing linear probing and kNN classifier accuracies on your SimCLR representations. This helps dissect which part of the performance drop is due to the evaluation protocol and which is due to the representations themselves. It seems that, indeed, the main difference in numbers is due to the evaluation setting. Nevertheless, the linear probing and kNN classifier accuracies are still lower than what has been reported for a ResNet18 on CIFAR10, e.g., in https://openreview.net/pdf?id=B8a1FcY0vi, Tab. 1. 
It will probably be difficult and not crucial to pinpoint exactly where the difference comes from. Just to confirm my understanding: The SimCLR representation learning is completely unsupervised and uses all of CIFAR10 (train+test set) during this unsupervised training, right? **Performance on LSD-C:** Many thanks for evaluating this additional method. I find it intriguing that it performs on par with RankStats, despite being entirely unsupervised. --- Reply to Comment 1.1.1: Title: Thanks for your response! Comment: Thank you for your invaluable feedback! We're pleased to note that our clarification has resonated, and your understanding of the augmentation graph and toy example aligns perfectly with our intent. Addressing your further queries: + *On the setting of $\tau_c > \tau_s$:* Yes! We choose the setting deliberately so that the clustering will be misled by the colors. It subsequently offers the space to improve by clustering samples on shapes, upon incorporating labels. Your interpretation from the view of Thm 4.2 also perfectly aligns with the intuition! + *On the value of $\mathfrak{l}_{sphere}$*: Good catch! At a high level, the discrepancy originates from the scaling factor. To elucidate, our theory hinges on the premise that each row of T sums to 1. Given $1 = \tau_1 >> \tau_c > \tau_s$ (footnote on page 5), we can follow the equation in the Appendix (top of page 13), which is simplified and has no normalization. We can fix the discrepancy by setting $\tau_1 = 0.6$, $\tau_c = 0.3$, $\tau_s=0.1$; the value of $\mathfrak{l}\_{sphere}$ then aligns with the reviewer's proposition of 0.15. In this case, concerning the estimation gap, both $\Delta_{kms}(\delta)$ and the estimation now become 0.0027 (when $\delta=0.01$). For the broader audience, we will make the simulation code open-sourced together with the SORL implementation upon the paper's publication. 
+ *On SimCLR's training setup*: The discrepancy in the SimCLR's performance requires a deeper examination of the different training settings. For the reviewer's reference, we directly use the training script at https://github.com/HobbitLong/SupContrast. Regarding the question, the answer is **yes** for the purely unsupervised setting but **no** for the `train+test` split. Specifically, we perform the unsupervised training on the `train` split but report KNN/LP performance on the `test` split. Thank you again for the comments and suggestions to help us improve the manuscript!
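For readers unfamiliar with the KNN-classifier protocol discussed in this thread (fit nothing, just classify each test embedding by its nearest labelled train embeddings), here is a minimal numpy-only sketch; the synthetic Gaussian features stand in for frozen SimCLR/SORL embeddings, and all names and numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-ins for frozen embeddings: two well-separated classes.
train_x = np.vstack([rng.normal(0, 0.1, (50, 8)), rng.normal(1, 0.1, (50, 8))])
train_y = np.array([0] * 50 + [1] * 50)
test_x = np.vstack([rng.normal(0, 0.1, (10, 8)), rng.normal(1, 0.1, (10, 8))])
test_y = np.array([0] * 10 + [1] * 10)

def knn_predict(x, k=5):
    d = np.linalg.norm(train_x - x, axis=1)  # distances to all train embeddings
    votes = train_y[np.argsort(d)[:k]]       # labels of the k nearest neighbours
    return np.bincount(votes).argmax()       # majority vote

preds = np.array([knn_predict(x) for x in test_x])
accuracy = (preds == test_y).mean()
assert accuracy == 1.0  # trivially separable synthetic data
```

In the actual protocol the same computation is run on network embeddings of the `train`/`test` splits; the point here is only that no parameters are learned at evaluation time.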
Summary: This paper presents spectral open-world representation learning that aims to learn a low-rank approximation of a constructed adjacency matrix. From this perspective, the authors study how the label information helps (or hurts) the classification performance by analyzing the error bound. Experiments on image classification benchmarks show the effectiveness of the proposed approach. Strengths: 1. The formulation of the open-world representation learning problem as a low-rank approximation of a weighted graph adjacency matrix is novel and reasonable. 2. The presentation is clear. The provided illustrative example is helpful for understanding the theory. 3. The new ORL approach itself is simple yet effective. Weaknesses: 1. The theoretical analysis presented in the main text is only applicable to the proposed SORL approach. Generalizing to other approaches needs independent considerations as stated by the authors in section 6. 2. The empirical validation seems somewhat disconnected from the main theoretical results (Theorems 4.1 and 4.2) as it only focuses on the performance of SORL. It would be better to conduct experiments to directly validate the main theorems e.g. by showing the connection between performance improvement by adding different label information and factors affecting $\Delta_{\pi_s}(\delta)$. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1. Is it possible to generalize the analysis to the SSL and supervised learning settings? 2. Is the spectral gap in theorem 4.1 crucial for the result? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: Please see the weakness section. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
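The "low-rank approximation of a weighted graph adjacency matrix" that this review refers to can be illustrated with a plain truncated SVD; by the Eckart-Young theorem, that truncation is the best rank-$k$ approximation in Frobenius norm. This is an illustrative sketch on a random matrix, not the paper's training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.random((20, 20))
A = (W + W.T) / 2  # symmetric weighted adjacency matrix (illustrative)

U, s, Vt = np.linalg.svd(A)
k = 5
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]  # rank-k SVD truncation

# Eckart-Young: the Frobenius error of the rank-k truncation equals the
# norm of the discarded singular values, and no rank-k matrix does better.
err = np.linalg.norm(A - A_k, "fro")
assert np.isclose(err, np.linalg.norm(s[k:]))
```

In the paper's setting the rank-$k$ factors are parameterized by a neural network rather than computed by an explicit SVD, but the objective being approximated is of this form.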
Rebuttal 1: Rebuttal: We thank the reviewer for the insightful questions! Below we address each of your comments in detail. #### **- Generalization of the theoretical analysis.** Fair concern! We are happy to expand our thoughts on the generalization of our theoretical analysis within a broader context. The primary objective of introducing SORL is to establish the first theoretical framework that has been lacking in the field of open-world representation learning for several years. We believe that this framework will open up new avenues for a deeper understanding of the effects of labeling perturbation within the community. Specifically: + *SORL answers the fundamental question common to all methods in open-world representation learning*. At a high level, SORL analyzes how the added label changes the representation space that leads to different clustering outcomes. This finding can be generalizable to other methods which may differ in the way of incorporating new labels. + *SORL paves the way to formally understand open-world representation learning methods.* We have laid the groundwork for understanding the closed-form solutions of these learning algorithms from a theoretical standpoint. Viewed from this perspective, the same analytical strategy employed in this paper remains applicable. For instance, we demonstrated in Section 6 that "SimCLR+SupCon" is a factorization of another regularized form of the adjacency matrix, where the perturbation analysis can be equally applied. + *SORL is also appealing for practical usage*. Our framework is also empirically appealing to use since it can achieve similar or better performance than existing methods on benchmark vision datasets. In particular, on CIFAR-100, we improve upon the best baseline OpenCon (Sun et al., 2022) by 3.4% in terms of overall accuracy. #### **- Experiments validating theoretical results.** Thank you for the excellent recommendations! 
To delve into the performance disparities among cases with different types of label information, we've structured the experimental setup described below. We've classified the settings into three categories: `unsup`, `sup-weak`, and `sup-strong`. + The `unsup` category represents cases without any label information. + The `sup-weak` setting involves adding labels to the 50 sub-classes within the first 10 super-classes. + The `sup-strong` category entails the distribution of labels evenly across 50 sub-classes spanning all super-classes. For clarity, we illustrate the configurations for sup-strong and sup-weak in the tables below, showcasing 20 CIFAR-100 classes within four super-classes. Within this configuration, the names of the known classes (those receiving labels) are highlighted in **bold**. This design ensures that the `sup-strong` setting fosters a more significant relationship between known classes with label information and novel classes compared to the `sup-weak` setting. The empirical results presented in the table below substantiate this insight. The clustering accuracy for the novel classes is ranked as: `sup-strong` > `sup-weak` > `unsup`. + **Empirical comparison** | Setting | Method | All | Novel | Seen | |--------|--------|----------|-------|-------| | `unsup` | SORL | 37.3 | 40.2 | 38.3 | | `sup-weak` | SORL | 49.8 | 46.4 | 66.0 | | `sup-strong` | SORL | 56.3 | 53.5 | 67.1 | + **Setting `sup-weak`:** | Super-class | Sub-class | |---------------|----------| |aquatic mammals | **beaver, dolphin, otter, seal, whale**| |fish | **aquarium fish, flatfish, ray, shark, trout**| | ... |... | |vehicles 1 | bicycle, bus, motorcycle, pickup truck, train| |vehicles 2 | lawn-mower, rocket, streetcar, tank, tractor| + **Setting `sup-strong`:** | Super-class | Sub-class | |---------------|----------| |aquatic mammals | **beaver, dolphin, otter**, seal, whale| |fish | **aquarium fish, flatfish**, ray, shark, trout| | ... |... 
| |vehicles 1 | **bicycle, bus, motorcycle**, pickup truck, train| |vehicles 2 | **lawn-mower, rocket**, streetcar, tank, tractor| #### **- Generalization to the self-supervised or supervised learning setting.** Our theorem explores the effects of labeling perturbation by comparing the performance between the self-supervised learning setting and the setting with partially available label information. This analysis can be further generalized to the supervised learning setting. It can be regarded as a special case within the open-world learning framework, where all classes are known and every sample is labeled. #### **- The role of the spectral gap.** Our response is two-fold. + For the clarity of the theoretical results, assuming a large spectral gap is a standard practice in spectral analysis (Joseph et al., 2016; Shen et al., 2022), a convention we have also adhered to in our study. + At a high level, it should be noted that the term remains small in most practical scenarios even in the absence of the spectral gap term. In more detail, the residual term related to the spectral gap is included in lines 497-498 of the Appendix. For this term to become large, several conditions must be satisfied simultaneously: 1) the spectral gap between $\sigma_k$ and $\sigma_{n}$ must be small; 2) the eigenvector $v_{n}$ (corresponding to $\sigma_{n}$) in the null space must encode the critical class information that is missing in the feature space; 3) the eigenvector $v_{n}$ must align with the vector $\mathfrak{l}$ which encodes the connection to label data. Although it's theoretically possible to construct a scenario where this occurs, we have chosen not to delve further into this in order to focus on conveying the main intuition of our work. --- Rebuttal Comment 1.1: Comment: Thank you for your response. Most of my concerns are addressed. 
I particularly feel empirical validation of the theory (beyond evaluation of the proposed method) is valuable and encourage the authors to consider incorporating relevant discussions in the paper. I increase soundness to 4.
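The spectral-gap discussion in this thread follows standard matrix perturbation (Davis-Kahan) intuition: the smaller the gap around the retained eigenvalues, the more a fixed perturbation can rotate the corresponding eigenvectors. A minimal 2x2 illustration (all matrices and numbers here are hypothetical, chosen only to make the effect visible):

```python
import numpy as np

def top_vec_overlap(gap, eps=0.05):
    """Overlap |<v, v'>| between the top eigenvectors of D = diag(1, 1-gap)
    and of its perturbation D + eps*E, for a fixed symmetric perturbation E."""
    E = np.array([[0.0, 1.0], [1.0, 0.0]])
    D = np.diag([1.0, 1.0 - gap])
    v0 = np.linalg.eigh(D)[1][:, -1]          # top eigenvector, unperturbed
    v1 = np.linalg.eigh(D + eps * E)[1][:, -1]  # top eigenvector, perturbed
    return abs(v0 @ v1)

# Davis-Kahan intuition: the same small perturbation barely rotates the top
# eigenvector when the gap is large, but rotates it strongly when it is small.
assert top_vec_overlap(gap=1.0) > 0.99
assert top_vec_overlap(gap=0.08) < 0.95
```

This is exactly why the residual term discussed in the rebuttal only blows up when the gap between $\sigma_k$ and $\sigma_n$ is small and the perturbation happens to align with the near-degenerate directions.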
Summary: This paper formulates open-world representation learning using a graph. This paper provides theoretical analyses for the framework (Thm 3.1) as well as the framework's performance (Thm 4.1 & 4.2). The experimental results show that the framework outperforms the existing methods in the open-world learning setting. Strengths: Overall, I enjoyed reading this paper and have a positive feeling. - Formulation of Eq. (6). By Thm. 3.1 this formulation corresponds to the SVD of the normalized adjacency matrix in Eq. (4). See more on the question (*). - The theoretical insights (Thm 4.1 & 4.2). The analyses using the $k$-means measure strengthen the formulation. Particularly, the insight in L251-L258 is interesting; knowing more about green light increases the knowledge about the green apple, but not the flowers. This corresponds to our intuition. Weaknesses: There's little I can say, but if I need to raise one, I would like to point out the experiments. The experiments are on only one split, but I am curious about the other splits. Ideally it'd be nice to have results like Fig 4 in [79], y-axis accuracy vs x-axis # of the known labels. By seeing this, we can know "how many pictures of the green light do we need to learn in order to know the green apple?" I also would like to know the computational time, see more on the question section (**) From this viewpoint, I would give 6. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - How much do the things need to be different in order to learn? By learning something, how well can SORL distinguish similar things? If we do not know anything about the green apple, can we increase the chance to distinguish different varieties of the green apple by learning green signals? Maybe the difference between the varieties of the green apple is embedded in the spectral gap, but do you have any insights on this? - I'm just curious how you came up with Eq. (6)? 
- This is not a question but a comment connecting to (*) in the strength section. Eq. (5) gives the SVD, but in this case it gives the eigendecomposition. Since the normalized Laplacian $L$ is defined as $L := I - A$, Eq. (5) gives the $k$ smallest eigenvalues of $L$. Thus, since the $k$ smallest eigendecomposition of the normalized Laplacian corresponds to the relaxed solution of the normalized graph cut, SORL actually corresponds to the relaxed solution of the normalized graph cut (see Sec. 5.3 in [i]). From this viewpoint, as far as I understand, Thm 3.1 can be more powerful for the spectral clustering community -- Eq. (6) corresponds to the result of spectral clustering (i.e., the relaxed solution of the normalized graph cut). With this, the proposed method is not surprising (in a good sense); the proposed SORL is a normalized graph cut informed by open-world learning. From this view, I'd like to connect to the point (**): if this method is faster than SVD, does this indicate that there can be some faster way of doing spectral clustering? - This is also minor but the authors might want to cite [i], which is one of the most cited papers in spectral clustering. [i] Von Luxburg, Ulrike. "A tutorial on spectral clustering." Statistics and computing 17 (2007): 395-416. ---- POST REBUTTAL I'm satisfied with the authors' rebuttal, therefore I increase my score from 6 to 7. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
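The normalized-cut relation mentioned in the review above (the $k$ smallest eigenvectors of the normalized Laplacian $L = I - D^{-1/2} A D^{-1/2}$ give the relaxed normalized graph cut) can be illustrated with a minimal sketch. The function name and the toy graph below are our own illustration, not from the paper under review:

```python
import numpy as np

def spectral_embed(A, k):
    """Return the k smallest eigenvectors of the normalized Laplacian
    L = I - D^{-1/2} A D^{-1/2}; the rows form the relaxed normalized-cut
    embedding (von Luxburg 2007, Sec. 5.3)."""
    d = A.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(d)
    L = np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    _, vecs = np.linalg.eigh(L)   # eigh sorts eigenvalues in ascending order
    return vecs[:, :k]            # columns = eigenvectors of the k smallest eigenvalues

# Toy graph: two disconnected triangles. The 2-dim embedding is constant
# within each triangle, so k-means on the rows recovers the two clusters.
A = np.zeros((6, 6))
for i in range(3):
    for j in range(3):
        if i != j:
            A[i, j] = A[i + 3, j + 3] = 1.0
U = spectral_embed(A, 2)
```

Running k-means with $k=2$ on the rows of `U` separates the two triangles exactly, which is the clustering-through-factorization view the comment relates SORL to.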
Rebuttal 1: Rebuttal: We thank the reviewer for the insightful questions! Below we address each of your comments in detail. #### **- Experiments with different labeling ratios.** Great suggestion! We provide the comparison by reducing the labeling ratio from 50% (default) to 25%, 10% and 0% on CIFAR-100, while keeping the number of known classes the same (i.e., 50 in CIFAR-100). The results suggest that a small portion of labels can lead to a strong improvement in the accuracy for both known and novel classes. | Labeling Ratio | Method | All | Novel | Seen | |--------|--------|----------|-------|-------| | 50% | SORL | 55.8 | 51.7 | 68.4 | | 25% | SORL | 54.4 | 50.5 | 64.4| | 10% | SORL | 51.6 | 50.4 | 60.2| | 0% (unsupervised) | SORL | 37.3 | 40.2 | 38.3 | #### **- Insights on learning the variety of similar samples.** Sure! We are pleased to share insights on distinguishing similar samples, which revolves around three sub-questions. 1. *How well can SORL distinguish similar things?* The ability to differentiate between similar items using the SORL algorithm depends on the embedding size $k$, as determined by the neural network. Specifically, the size $k$ corresponds to the rank of the approximation matrix used within the SORL algorithm. A larger $k$ provides a greater capacity to perceive subtle differences among similar samples. In our experiments on the CIFAR dataset, we set $k=1000$ to allow the encoding of a broad spectrum of variations, going beyond mere class distinctions. 2. *How to increase the chance of distinguishing different varieties?* a) Should the learning capacity permit, increasing the dimensionality $k$ can extend the "volume" to encompass more varieties. b) Enhancing the augmentation strategy contributes to a more precise adjacency matrix, assisting the model in better discerning subtle differences between varieties. 3. 
*What is the relationship between the variety and the spectral gap?* At a high level, the magnitude of singular values ($\sigma$) reflects the importance of a specific variety (represented by a singular vector). A larger spectral gap implies a significant variation in importance levels. For instance, the $\sigma$ for features that distinguish red from green would be considerably larger than the $\sigma$ that differentiates light green from dark green. #### **- Rationale in deriving Eq. (6).** We derive Eq. (6) by unfolding the Frobenius norm of $\mathcal{L}_{mf}$. A high-level explanation of the proof is in L153 to L155. #### **- Application of the theorem to the broader spectral clustering community.** We are grateful for the reviewer's insightful comments and agree with the potential of applying our algorithm to traditional spectral clustering. In response to the points raised by the reviewer, we offer detailed discussions on the following subjects: + *Setup for learning tabular data.* Our SORL algorithm is constructed using an augmentation graph where each element represents the probability of considering two samples as a positive pair. The augmentation strategy can be tailored for tabular data; for example, by adding noise to continuous data or permuting items in discrete data. This approach essentially resembles distance-based graphs, such as the k-nearest-neighbor graph. + *Applying the main theorem to the spectral clustering community.* Definitely! We can directly apply Theorem 4.1 (assuming the reference to 3.1 is a typo in the comments) to spectral clustering, provided it is built upon the augmentation graph as discussed in the first point. + *Computational aspects.* We concur with the reviewer's observation that SORL can be more efficient than running a full SVD on the entire graph. This efficiency is due to SORL's utilization of a sampling strategy. It is worth noting that while SORL is confined to the augmentation graph, SVD can be applied to all graphs. 
+ *Citing [i]*. Thanks for pointing it out! We have added it to our revised draft. --- Rebuttal Comment 1.1: Title: Increasing my score from 6 to 7 Comment: Thank you very much for the rebuttal. I am satisfied with the additional experiments. I think that the experimental results make sense. I'd like to increase my score from 6 to 7. By camera-ready (you don't have to do this in the rebuttal period, in my opinion), I'm curious about the other splits for the other methods, ideally one open-world-based method (e.g., ORCA) and one non-open-world method (e.g., FixMatch), if this makes sense. Sometimes unsupervised is not possible, but I believe that you can probably use only one instance instead. The reason why I propose such an experiment is that SORL does not seem to grow fast on "novel," which is an important category for the open-world learning setting as far as I understand. From 0% to 10% the accuracy seems to grow fast, but from 10% to 25% the accuracy does not change, and to 50% the accuracy increases only marginally. In contrast, "seen" seems to grow, which makes sense. But we can only judge this by comparing with others. If the additional experiments show that SORL is "slow" on the novel classes, do you have any insights into this slowness from a theory perspective? Is 10% all you need for novel? I expect to have some answers for this by camera-ready. Even if your method is somewhat weaker than the other methods in some sense from this perspective, I am assured your method is promising since your method almost beats the others at 50%. I guess that the additional experiments will not reduce the value of this study, and therefore I request this by camera-ready, only if this makes sense. At the same time, I understand that the additional experiments for the other methods may not be possible since the other results may be taken from other papers. Thank you very much for the other answers to my questions. All of these make sense. 
I enjoyed reading the rebuttal as well as the manuscript. --- EDIT: At this point I do not know how to update the score in my main review, as the edit button has disappeared. Thus the visible rating remains 6. But don't worry, my rating in my mind at this point (as of 10th Aug) is 7. I'll update it as soon as I figure out how to do so.
Summary: The paper tackles the domain of open-world representation learning, which aims to learn representations that can correctly cluster samples of novel classes and classify samples of known classes by utilizing knowledge from the labeled data. Notably, the motivation of this paper is to provide a theoretical foundation in this area by introducing a novel graph-theoretic framework specifically designed for open-world settings. Theoretically, the crux of the framework involves characterizing clustering through graph factorization. A new algorithm called Spectral Open-world Representation Learning (SORL) is introduced, which essentially involves spectral decomposition on the graph. The authors provide theoretical guarantees by deriving a provable error bound on the clustering performance for both known and novel classes. Practically, the paper also offers empirical evidence, demonstrating that SORL is competitive with, or even superior to, existing methods on benchmark datasets. Strengths: The paper is well-written with clear motivation and structure. Overall an interesting problem; good mathematical exposition; solid theoretical results and analysis; interesting and good experimental results. Weaknesses: 1. The experiments are not sufficient. Other large, commonly used benchmark datasets, such as ImageNet-100, are not used. 2. The paper has a very good theoretical analysis, but it is still unclear what contributes to the performance improvement. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1. It would be interesting to see the results on a larger benchmark dataset, ImageNet-100. 2. What is the effect of using known/novel class ratios other than 50%? Like 0.25, 0.1. 3. How do you choose the hyper-parameters $\eta_{u}$, $\eta_{l}$? 4. In the supplementary material, it is mentioned that pre-training is used. I wonder why this pre-training is necessary for this method and what the performance would be without pre-training. 
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: The broader impact has been discussed. It has a positive potential impact on the community. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the insightful questions! Below we address each of your comments in detail. #### **- Results on ImageNet-100.** Sure! As suggested, we report below results on the ImageNet-100 benchmark, which is commonly compared in the literature. For ImageNet, the training and evaluation setting is consistent with prior works (ORCA, GCD, OpenCon, etc), which have a division of 50 known classes and 50 novel classes. The empirical results suggest that SORL is comparable to the current state-of-the-art methods with a <1% gap in terms of overall accuracy. Beyond chasing the benchmarks, the significance of our work lies in providing an in-depth theoretical understanding of the open-world representation learning task (which is urgently lacking in the field). | Method | All | Novel | Seen | |--------|----------|-------|-------| | FixMatch (Kurakin et al., 2020) | 34.9 | 36.7 | 65.8 | | DSL (Guo et al., 2020) | 30.8 | 32.5 | 71.2 | | CGDL (Sun et al., 2020) | 31.9 | 33.8 | 67.3 | | DTC (Han et al., 2019) | 21.3 | 20.8 | 25.6 | | RankStats (Zhao & Han, 2021) | 40.3 | 28.7 | 47.3 | | SimCLR (Chen et al., 2020a) | 36.9 | 35.7 | 39.5 | | ORCA (Cao et al., 2022) | 76.4 | 68.9 | 89.1 | | GCD (Vaze et al., 2022) | 75.5 | 72.8 | 90.9 | | OpenCon (Sun et al. 2023) | 83.8 | 80.8 | 90.6 | | SORL (Ours) | 82.9 | 80.4 | 89.2 | #### **- Discussion on the performance improvement over baseline methods.** We are happy to share insights on the differences in the representation learning strategies that have been implemented in baseline methods. Two of the most recent methods, GCD and OpenCon, employ a combinatory training approach that leverages both supervised contrastive learning (SupCon) with labeled samples, and self-supervised contrastive learning (SimCLR) with unlabeled samples. A detailed theoretical explanation of this strategy is available in Appendix D. 
At a high level, the feature space derived from “SupCon+SimCLR” is given by the top-k eigenvectors of a matrix different from SORL's. This insight serves as an invitation to other researchers to further explore and provide more comprehensive comparisons between different representation learning strategies. #### **- Results with different known/novel class ratios.** Great suggestion! We provide a comparison by reducing the known class number from 50 (default) to 25 and 10 on CIFAR-100, while keeping the labeling ratio the same (i.e., 50%). The results suggest that using more known classes helps the clustering performance on the novel classes. | Known Classes Number | Method | All | Novel | Seen | |--------|--------|----------|-------|-------| | 50 | SORL | 55.8 | 51.7 | 68.4 | | 25 | SORL | 50.1 | 47.7 | 68.5 | | 10 | SORL | 48.6 | 47.3 | 66.0 | #### **- Strategy in choosing hyperparameters $\eta_u$ and $\eta_l$.** Good question! We follow the validation strategy proposed in prior works on open-world representation learning (see Appendix I in Sun et al., 2023). Specifically, we split the classes in $\mathcal{Y}_l$ equally into two parts: known classes and “novel” classes (for which we know the labels). Moreover, 50% of samples in the selected known classes are labeled. The constructed validation dataset is used to select the hyper-parameters via grid search. #### **- The necessity of using a pre-trained model.** Fair concern! We use the pre-trained model mainly for the following reasons: 1. *Adhering to Protocols in Prior Works*. The learning setting that encompasses both labeled and unlabeled data, along with a mixture of known and novel classes, was first introduced in ORCA (Cao et al., 2022). That work applies the ORCA algorithm to a ResNet model pre-trained using a self-supervised loss, a strategy also adopted in subsequent works (such as OpenCon, GCD, etc.). To maintain a fair comparison, we adhere to this established convention. 2. 
*Efficiency in Training*. Importantly, by utilizing a pre-trained backbone, both prior works and SORL focus on training only the projection layers and the last block (layer4) of the ResNet, while keeping the preceding blocks fixed. This approach is more efficient in training, consuming less GPU memory and significantly reducing the number of gradient update steps. 3. *Emulating the Real-World Pipeline*. In industry, it is standard practice to first train a model using a self-supervised loss on a large corpus of unlabeled data. Leveraging such a pre-trained model enables a wide array of downstream tasks depending on the additional labeled information available. The exploration of the label perturbation on the pre-trained model constitutes a key research question investigated in this paper. What if we do not apply the pretraining? As shown in the following table, the two settings exhibit similar performance. | Pretrain Epochs | Training Epochs | Method | All | Novel | Seen | |--------|--------|----------|-------|-------|-------| | 1200 | 400 | SORL | 55.8 | 51.7 | 68.4 | | 0 | 1600 | SORL | 55.4 | 52.2 | 67.8 | --- Rebuttal Comment 1.1: Comment: Thanks for your responses. Having read the rebuttal, I will increase my initial score. --- Reply to Comment 1.1.1: Title: Thank you! Comment: We would like to thank the reviewer for taking the time to read our rebuttal and for your positive feedback! Best, Authors
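The validation-split strategy for hyperparameter selection described in the rebuttal above (split the labeled classes $\mathcal{Y}_l$ into pseudo-known and pseudo-novel halves, then label 50% of the pseudo-known samples, and grid-search on the resulting split) can be sketched as follows; all function and variable names are our own illustration of that procedure, not code from the paper:

```python
import random

def build_validation_split(samples_by_class, seed=0):
    """Hypothetical sketch: construct a validation split in the style of
    Sun et al., 2023 (Appendix I). Half the labeled classes act as
    pseudo-known (50% of their samples labeled), the other half act as
    pseudo-novel (fully unlabeled)."""
    rng = random.Random(seed)
    classes = sorted(samples_by_class)
    rng.shuffle(classes)
    half = len(classes) // 2
    known, novel = classes[:half], classes[half:]
    labeled, unlabeled = [], []
    for c in known:
        xs = list(samples_by_class[c])
        rng.shuffle(xs)
        cut = len(xs) // 2                    # label 50% of pseudo-known samples
        labeled += [(x, c) for x in xs[:cut]]
        unlabeled += xs[cut:]
    for c in novel:
        unlabeled += list(samples_by_class[c])  # pseudo-novel: all unlabeled
    return known, novel, labeled, unlabeled
```

Hyperparameters such as $\eta_u$ and $\eta_l$ would then be grid-searched on this constructed split, since the pseudo-novel labels are available for evaluation.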
Rebuttal 1: Rebuttal: We thank all the reviewers for their constructive and valuable feedback. We are honored that the reviewers acknowledge the novelty of our graph-theoretic framework (R1, R3, R4), with excellent contribution (R1, R4) and soundness (R1). Multiple reviewers value the theoretical nature of our paper (R4), finding the theoretical insight to be interesting (R2) and solid (R1). Beyond theoretical insight, the reviewers recognized the practical value of the framework (R4), with the approach being simple and effective (R3) and the empirical results outperforming (R1, R2, R3) the existing methods in the open-world learning setting. We are equally glad about R2's enjoyment in reading the paper, and about the comments on the excellent presentation (R1, R3), good writing (R1, R4), and clear motivation and structure (R1) of our paper. We have addressed the reviewers’ comments and concerns in individual responses to each reviewer. (\* As abbreviations, we refer to **Reviewer iP9L** as R1, **Reviewer 7sHp** as R2, **Reviewer 66eW** as R3, and **Reviewer 1J6p** as R4, respectively.) As suggested by R4, we added the following paragraphs to the draft: + Added the paragraph “Analysis goal” in Section 2: *Our analysis goal is to comprehend the role of label information in shaping representations for both known and novel classes. It's essential to note that our theoretical approach aims to understand the perturbation in the clustering performance by labeling existing, previously unlabelled data points within the dataset. By contrasting the clustering performance before and after labeling these instances, we uncover the underlying structure and relations that the labels may reveal. This analysis provides invaluable insights into how labeling information can be effectively leveraged to enhance the differentiation of both known and novel classes.* + Added the limitation discussion section. 
*Limitation: Our theoretical framework has two potential limitations to be considered in practice: a) The augmentation graph serves as a potent theoretical tool for elucidating the success of modern representation learning methods. However, it is challenging to ensure that current augmentation strategies, such as cropping, color jittering, or Gaussian blurring, can transform two dissimilar images into identical ones. b) The utilization of Theorems 4.1 and 4.2 necessitates an explicit knowledge of the adjacency matrix of the augmentation graph, a requirement that can be intractable in practice. In light of these limitations, we encourage further research to enhance the practicality of these theoretical findings.* Pdf: /pdf/dec5b4f2abe41913560844f15694e6311d70391c.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Prompt-based Node Feature Extractor for Few-shot Learning on Text-Attributed Graph
Reject
Summary: This paper tackles the problem of representation learning on text-attributed graphs (TAGs), which has gained significant attention in recent years. The primary focus of this research is few-shot node classification. Existing two-stage methods have been unsuccessful in effectively capturing the complex relationship between graph structure and textual features, as well as in incorporating downstream task information. To address these limitations, the authors propose a novel approach called G-Prompt, which aims to simultaneously capture the graph topology and downstream task information. G-Prompt achieves this by enhancing the pretrained language model (PLM) with graph awareness. At the last layer of the PLM, a graph adapter is introduced, allowing the model to become aware of the graph structure. This graph adapter can be combined with task-specific prompts, enabling the generation of task-specific and graph-aware node representations. Strengths: - Clear motivation of the method - Well-written summary of existing work (taxonomy, pros and cons) - Good results in few-shot node classification - Valuable insights into the utility of graph information and prompts for both LM- and GNN-based methods Weaknesses: - **Some tables and figures are not self-contained, such as Figure 1 and Table 2.** The captions are too concise for readers to follow. For example, in Figure 1, the authors should summarize the proposed framework in one or two sentences to aid comprehension; for Table 2, the meaning of the bold and underlined entries should be explicitly specified to ensure clarity and understanding. - **Background, method, and implementation of few-shot learning are missing.** As few-shot learning is less common in the graph community compared to supervised learning, it is essential to provide additional explanations and context for unfamiliar readers. 
Moreover, some compared methods, such as GIANT, were originally designed for supervised learning; how to adapt or extend them for few-shot learning scenarios is not straightforward and should be explicitly discussed. - **The efficiency of the proposed method is stated but not experimentally proven.** In Section 3.3, the authors claim that training the graph adapters can be achieved with very few computing resources. However, to support this claim, it is necessary to provide a comparison with the compared methods (w.r.t. training parameters, memory usage, training speed, training time, etc.). This empirical evidence would provide stronger support for the efficiency of the proposed method. Furthermore, it is worth noting that the graph adapter is trained for 50 epochs on ogbn-arxiv, while typically fine-tuning for 3-5 epochs is sufficient [1]. Therefore, it raises the concern that the additional training epochs may negate the benefits introduced by training only the graph adapter. **Reference:** [1] Learning on Large-scale Text-attributed Graphs via Variational Inference, ICLR 2023. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: - According to Section 3.4, the node features are fed into subsequent GNNs, but in Figure 1 the GNN encoder is not part of the framework, which is a little confusing. - The dimension of the node features needs to be reduced for practical usage (Section 3.4), and selecting the optimal M is heuristic. I am wondering how sensitive the graph prompter is to the value of M, and whether there is any discussion on the selection of the optimal M. Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: - The proposed framework is not directly compatible with large language models (LLMs). Nowadays, when people refer to prompt-based methods, they mainly focus on utilizing large language models such as GPT. These models are known for their powerful text modeling and reasoning abilities, which have the potential for performing few-shot learning on text-attributed graphs (TAGs) [2]. However, the framework proposed in this paper is not directly compatible with LLMs, as G-Prompt requires access to LLMs' latent embeddings or logits, which are not provided by models like GPT-3.5 and GPT-4. **Reference:** [2] Graph-ToolFormer: To Empower LLMs with Graph Reasoning Ability via Prompt Augmented by ChatGPT. arXiv preprint arXiv:2304.11116 (2023). Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for reading our paper and affirming the motivation, summary of related work, results, and insights of our paper! Also, thank you for your valuable suggestions and questions regarding our paper! Based on your comments, we have done the following work: **1. Paper revision:** * Figures & Tables: We have revised Figure 1 and Table 2 (Weakness 1 & Question). * Few-shot setting: We have revised the background introduction on graph few-shot learning and the justification for the compared baselines (Weakness 2). **2. Add time complexity analysis and experimental report (Weakness 3):** * We analyzed time and space complexity and reported the runtimes of different baselines. **3. Add analysis on feature dimensions (Question 2):** * We added analysis experiments on varying the dimension M. **4. Add experiments on GPT2 (Limitation 1):** * We added related experiments based on GPT2 and prospects of combining with LLMs in future work. Here are detailed responses to your suggestions: **1. Thank you for the suggestions on our figures and tables (Weakness 1 & Question 1).** We have added more descriptions for the figures and tables. Specifically, in Figure 1, we have included an illustration showing the input to the GraphAdapter, and detailed descriptions of the input and output during GraphAdapter training and inference in the legend. In addition, we thoroughly checked all tables in the paper and added more detailed captions. **2. Thank you for the suggestions on our few-shot learning setup (Weakness 2).** * First, let us explain the motivation for our few-shot learning setup: we proposed graph few-shot learning because we found that current TAGs modeling methods like GLEM and GIANT are mostly validated on Ogbn-arxiv and Ogbn-product, while in practice the number of available labels is often much smaller. 
This motivates us to think about how to do TAGs modeling when the number of labels is small. To formalize this problem, we borrowed the few-shot learning setup from prior works [1], i.e., exploring how to do few-shot learning on graphs. Under this setup, we analyzed the limitations of current methods and proposed G-Prompt. Experiments validate that our method can effectively combine PLMs and prompts, integrating graph and task-relevant information to perform few-shot and zero-shot inference on graphs. * Secondly, we made the following modifications to the paper to address your concerns: - Introduction: Explain that we adopt the few-shot setup of prior works to formalize our problem: how to do TAGs inference when labels are limited. - Related work: Introduce current graph few-shot learning works. Explain that they do not explore combining with TAGs; they focus more on adding new classes and on how to do classification. Their setup differs substantially from ours, so these works cannot be directly compared as baselines. - Experiments: We added GLEM as a baseline. When introducing GIANT and GLEM, we emphasize that these methods lack specific designs catering to the few-shot setup. GLEM and GIANT also differ substantially in setup from G-Prompt: GLEM aims to achieve end-to-end training on TAGs, while GIANT and G-Prompt employ self-supervised training, aiming to obtain representations that generalize to various downstream tasks. In the analysis of our results, we emphasize that our method effectively leverages both task-relevant information and graph information for inference through the integration of language model prompting. Notably, our method outperforms both GIANT and GLEM under scenarios with limited labels. **3. 
Thank you for the suggestions on the efficiency of our method (Weakness 3).** Since current TAGs methods adopt different language models (GLEM uses DeBERTa, GIANT uses bert-base-uncased, and GraphAdapter mainly adopts RoBERTa, ALBERT, and GPT2, which are more compatible with prompting), these PLMs inherently differ in inference speed, so a direct comparison of runtimes is not fair. To enable a fair comparison, we analyzed the time and space complexities of the three methods from the pretraining and downstream training perspectives. We also added runtime comparisons in the appendix, but these results are just for reference. See more details in our response to Fbx8 and Table 5. **4. Thank you for the question on the dimension of node features in our method (Question 2).** Following your suggestion, we conducted analysis experiments varying the dimension. The results are in Table 7 below. Overall, performance stabilizes after 512. As the results show, we recommend searching over the dimension as a hyperparameter when using G-Prompt in practice. #### Table 7. Performance of G-Prompt with different dimensions of node features |Dimension|256|512|768|1024|1280| |:-:|:-:|:-:|:-:|:-:|:-:| |Arxiv|0.5854|0.5964|**0.5971**|0.5719|0.5955| |Instagram|0.5730|**0.5917**|0.5745|0.5803|0.5811| |Reddit|0.6080|0.6141|0.6183|0.6177|**0.6215**| **5. Thank you for the concern about combining with LLMs (Limitation 1):** Firstly, we added experiments based on GPT2 and found that G-Prompt also adapts to it well. Although the logits of GPT-3.5/4 are currently not available, there are many open-sourced LLMs, like LLAMA2 [2] and ChatGLM2 [3], which adopt the same loss as GPT2. So in principle our method can be combined with large models. In fact, we did some experiments on LLAMA2-13b, and the conclusions are similar to the GPT2 experiments, but due to the space limit of the paper, we plan to write a separate report on G-Prompt experiments on large models. **We hope our responses have addressed your questions. 
We look forward to further suggestions and feedback from you.** **Reference** [1] Meta-GNN: On Few-shot Node Classification in Graph Meta-learning. [2] Llama 2: Open foundation and fine-tuned chat models [3] GLM: General Language Model Pretraining with Autoregressive Blank Infilling --- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal. I will maintain my original scores, as I still hold reservations regarding the final readability and quality of the paper.
Summary: In this paper the authors present a new framework that combines the benefits of graph models with large language models. The authors argue that under the current modeling paradigm, LLMs are trained in a downstream-task-agnostic fashion, although with prompts they can be fine-tuned for specific tasks. Graph networks provide a principled approach to inference over structured data (such as text-attributed graphs); however, there is currently no good mechanism to combine graph inference with LLMs and tune the output of LLMs based on the underlying structured data. To alleviate this problem, the authors propose G-Prompt, a new technique that allows them to incorporate graph structure into LLM fine-tuning. To do so, they fine-tune the last layer of the LLM by utilising the graph structure information, using the same pooling approach as a graph model. They show that their mechanism performs better than vanilla LLM training. They further show that their method is better than SOTA methods for few-shot node classification. Strengths: 1. In this paper the authors present a novel technique to combine LLMs with graph models. 2. The proposed technique has wide application potential, as a number of real-world applications rely on structured data. 3. The method beats the current SOTA techniques for few-shot node classification. 4. It achieves performance comparable to supervised techniques for zero-shot learning, which is impressive. Weaknesses: 1. The proposed technique certainly combines the best of sequence modeling using LLMs and structured data modeling using graph networks. It would be interesting to try out a simple technique of flattening the graph data into a sequence and training an LLM with it. 2. It would be good to discuss the computation and efficiency tradeoffs across the different experiments. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. 
Have you tried an experiment where you flatten the graph (for instance by appending tokens with some positional information) and feed it into an LLM? How does it compare with your proposed G-Prompt work? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: 1. It would be interesting to see how this method performs when applied to large vision models (LVMs). Do we see similar trends? 2. Other than classification, could this technique be used for other semantic understanding tasks such as similarity matching? Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive acknowledgment of the innovation in our method, its practical applications, and the experimental results concerning few-shot and zero-shot scenarios. We also greatly appreciate your suggestions and the questions you raised. Based on your feedback, we have made the following improvements: **1. Introduce a new baseline (Weakness 1 & Question 1):** We have introduced a new baseline, "edge-flattening," and provide details about its implementation, results, and limitations. **2. Add a discussion of efficiency (Weakness 2):** We have included additional experimental results that discuss computation and efficiency trade-offs. These results demonstrate that G-Prompt not only outperforms existing methods but also maintains significant advantages in terms of time and space complexity. **3. Add a discussion of Large Vision Models (LVMs) (Limitation 1):** We have explored the relevant literature on LVMs and discuss the potential integration of G-Prompt with LVMs in future work. **4. Add semantic matching tasks (Limitation 2):** We design two few-shot semantic matching tasks on Arxiv and Instagram. Here are detailed responses: **1. We appreciate your suggestion of flattening graphs and feeding them into PLMs (Pre-trained Language Models).** This approach was indeed one of our initial considerations. However, it has certain limitations: (a) Length: Due to constraints on language model inputs, it is challenging to directly input extensive graph information. (b) Efficiency: The same node may be involved in multiple neighbors' features, leading to inevitable duplicate computations. In response to your suggestion, and following our paper's approach, we have defined an "edge-flattening" baseline. Specifically, for each edge between two nodes with corresponding text features $S_i$ and $S_j$, we construct a prompt. For example, for Arxiv, the prompt is "[Task prompt]. Its abstract is $S_{i}$. 
One of its cited papers' abstracts is $S_{j}$." We define the prediction result as $p_{ij}$. We aggregate all predictions for node $i$ to obtain the node representation $p_i = \mathrm{Pool}(\{p_{ij} \mid j \in \mathcal{N}(i)\})$. This approach avoids exceeding the token limit of PLMs compared to using all neighbor features directly. Despite this, the token limit can still be exceeded; in such cases, we truncate $S_i$ and $S_j$ similarly. To provide a better comparison, we introduce the "Prompt-sparse*" baseline, which is the same as "edge-flattening" but masks $S_j$. Experimental results in Table 4 indicate that "Prompt-sparse*" performs worse on Instagram and Reddit due to reduced node features, but where "edge-flattening" mitigates the missing node information, its performance still surpasses prompts on a single node. This suggests that PLMs possess the ability to understand edge meaning. Since "edge-flattening" has a high time complexity, we conducted neighbor sampling for Arxiv with 5 neighbors, which may affect its performance. **2. We appreciate your suggestion to discuss time and computation efficiency.** Comparing the time complexities of different methods is challenging due to varying compatible base language models. Therefore, we estimate time complexities as follows: the inference time/space complexity for a single node with the language model is $T_{infer}$ and $J_{infer}$, and the training time/space complexity is $T_{train}$ and $J_{train}$. The complexity of non-linear transformations of PLM representations is $T_{MLP}$ and $J_{MLP}$. GIANT's [1] complexity is equivalent to the time required for fine-tuning the PLM, i.e., $O(N \times T_{train})$ for training and $O(N \times T_{infer})$ for inference. GLEM [2] has similar complexity to GIANT. Our approach involves a single inference pass of the PLM, after which all operations are independent of PLMs. 
GraphAdapter predicts tokens based on a few randomly selected edges for each word, resulting in $O(|S_{all}| \times T_{MLP})$ complexity, where $|S_{all}|$ is the total number of training tokens in the TAGs. Hence, our total complexity is $O(N \times T_{infer} + |S_{all}| \times T_{MLP})$. Our primary advantage is independence from $T_{train}$, and $T_{infer}$ can be accelerated by many methods. Considering larger language models, where $T_{train} \gg T_{MLP}$, our approach holds a significant advantage. In terms of space complexity, our approach does not require loading language model parameters during training, resulting in $O(batchsize \times J_{MLP})$ for G-Prompt compared to $O(batchsize \times J_{PLM})$ for methods involving fine-tuning. Generally, $J_{PLM} \gg J_{MLP}$, allowing G-Prompt to accommodate larger batch sizes in memory-restricted GPU environments. We also report efficiency comparisons for reproducibility purposes in Table 5. **3. We appreciate your question regarding limitations and the integration of Large Vision Models (LVMs).** Due to the time constraints of the rebuttal, we can only discuss the potential integration of LVMs with G-Prompt here. We have carefully investigated LVM-related papers; GraphAdapter can be combined with many multimodal LVMs, and we intend to include a discussion/citation of LVMs in the paper's future work. Owing to the character limit of the response, we do not provide an extensive explanation in this section. **4. Thank you for your suggestion regarding the evaluation tasks.** This suggestion aligns well with G-Prompt's design and the paper's theme. We will design two tasks on Arxiv and Instagram: (a) Paper Title Matching and (b) User Post Matching. We promise to add this experiment before the notification. We hope our responses have addressed your questions. We look forward to further suggestions and feedback from you. 
**We eagerly anticipate engaging in further discussions with you regarding the integration of LVMs and the details of the new tasks.** --- Rebuttal Comment 1.1: Title: Question about updating your score Comment: I noticed your score dropped from 6 to 4. Is there some new concern? I'm very eager to know the reason in order to further improve G-Prompt, and I also look forward to receiving your feedback on the new content I've added in my rebuttal.
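The edge-flattening aggregation described in the rebuttal above can be sketched in a few lines. This is a hypothetical illustration, not the paper's actual implementation: the `plm_predict` callback, the template wording, and the choice of mean pooling are all assumptions standing in for the real PLM scoring step.

```python
def edge_flatten_predict(node_text, neighbor_texts, plm_predict,
                         task_prompt="This paper's subject is [MASK]."):
    """Edge-flattening sketch: build one prompt per (node, neighbor) edge,
    score each prompt with a PLM, then mean-pool the per-edge predictions
    p_ij into a node-level prediction p_i = Pool({p_ij : j in N(i)})."""
    edge_preds = []
    for s_j in neighbor_texts:
        # One prompt per edge: the node's own text plus a single neighbor's.
        prompt = (f"{task_prompt} Its abstract is {node_text}. "
                  f"One of its cited papers' abstracts is {s_j}.")
        edge_preds.append(plm_predict(prompt))  # p_ij, e.g. class probabilities
    # Mean pooling over neighbors (any permutation-invariant Pool would do).
    n = len(edge_preds)
    return [sum(col) / n for col in zip(*edge_preds)]
```

With neighbor sampling (as the rebuttal does for Arxiv with 5 neighbors), `neighbor_texts` would simply be a sampled subset of $\mathcal{N}(i)$.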
Summary: The paper feeds manually designed prompts to LLMs to get task-specific text features, instead of BERT-based fixed features. Then a GNN is applied on top of the node features for node classification. Few-shot node classification on 3 datasets is conducted for evaluation. Strengths: 1. The motivation is clear and reasonable. 2. Results on few-shot node classification outperform previous methods. Weaknesses: 1. The prompts need to be manually designed, which significantly harms the novelty of this work. It would be more interesting if the prompts could be automatically and jointly learned with the graph part. 2. The writing needs to be improved. For example, Figure 1 failed to help me understand the model design of this work. BTW, the word "random" in this figure is misspelled. 3. Experiments are conducted on a single task (few-shot node classification). 4. Related work is not complete. There has been some work focusing on graph+prompt [1,2]. [1] Gppt: Graph pre-training and prompt tuning to generalize graph neural networks [2] Graphprompt: Unifying pre-training and downstream tasks for graph neural networks Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Refer to the weaknesses Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: No. Can this work be adapted to link prediction or graph-level tasks? Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: First of all, thank you very much for reading our paper and affirming the problem setting we proposed and the effectiveness of our method. Based on your comments: * We added discussions about soft prompts when introducing model prompts, including how our method can combine with soft prompts, and why we adopt hard prompts in this paper. * We thoroughly checked for errors in the full text, and designed two new tasks, link prediction and few-shot text matching, to further verify the applicability of GraphAdapter. * In addition, we added references to related work on graph + prompts. Before responding, we would like to briefly recap our paper: **Motivation:** We find most existing TAG methods are designed and evaluated on Ogbn-Arxiv and Ogbn-Instagram. However, label amounts are often limited in reality. The applicability of current methods to such scenarios is still unexplored. Therefore, this paper aims to investigate a problem overlooked by the graph community: how to do few-shot/zero-shot inference on TAGs. **Method:** Inspired by prompts in few-shot and zero-shot learning, we propose G-Prompt. Its main goal is to incorporate graph information into the language model prompting process, so the process can acquire both task-relevant information through prompts and capture graph information. **Experiments:** In the few-shot learning setup, we show G-Prompt can effectively integrate graph and task prompt information, outperforming SOTA TAG modeling methods given small samples. We also achieve the first zero-shot inference on TAGs: G-Prompt utilizes graph information and infers node properties better than PLMs alone under the zero-shot setting. **Insights/Contributions:** (a) We are the first to explore few-shot/zero-shot inference on TAGs. (b) Our proposed G-Prompt can effectively combine with language models. (c) We provide a new set of experimental results on doing zero-shot and few-shot learning on graphs. 
We believe our results can inspire a new direction of combining graphs and pre-trained language models, i.e., by incorporating prompts, we can train captured graph information through language model losses. We also believe our work can draw more attention to zero-shot and few-shot inference on graphs, and provide insights on integrating graphs with the currently booming large models. Here are the detailed responses to your suggestions: **1. Thank you for the suggestions on our prompts. (Weakness 1)** First, we explain why we adopted manually designed prompts. Manually designed prompts are the earliest, most universal, and simplest form of prompts [2]. Combined with human experience, their effectiveness in few-shot and zero-shot scenarios has been validated in many applications. Therefore, as the first work exploring how language models do prompting on TAGs, and how GNNs can be incorporated into this prompting paradigm, **the core of this paper is whether GNNs can combine with prompts, so we adopted hard templates as the verification means.** Second, we explain why we did not use soft prompts or the many language model adapters. Indeed, manually designed prompts require human design and have limitations. However, many papers show that **soft prompts do not perform well under low-sample regimes** in NLP [1]. Also, soft prompts have high time and memory costs in the era of large models. Therefore, considering effectiveness and efficiency, as well as our focus on few-shot learning, we discussed the soft-prompt version in the paper. Notably, **our method does have the capability to combine with soft prompts.** The simplest form is to directly train a soft prompt end-to-end on the language model, and then apply G-Prompt based on this soft prompt. However, how to prompt is not the focus of this work, so we will include this part in our future work. **2. Thank you for the suggestions on our writing. 
(Weakness 2)** We have redrawn the model figures and thoroughly checked the full text. **3. Thank you for the comments on experiment types. (Weakness 3)** Indeed, currently limited by the diversity of TAGs, our method, like most existing ones, is only tested on node classification. However, the two datasets we collected, Reddit and Instagram, do support meaningful link prediction tasks. But constructing suitable datasets and tasks for link prediction requires time, so we can only promise to report link prediction results and experiments before the final notification. **4. Thank you for the suggestions on our related work. (Weakness 4)** We have added graph+prompt works to the related work. However, it should be emphasized that although these works have titles overlapping with ours, e.g., involving graph and prompt, the actual problems explored differ a lot. These graph-prompt works study how to efficiently fine-tune pre-trained GNNs, while our work uses PLMs' prompts, exploring how to incorporate graph information into language model prompting. **To avoid confusion, we highlighted this difference when citing these works.** **We hope our responses have addressed your questions. We look forward to further suggestions and feedback from you.** **REFERENCE:** [1] Fantastically ordered prompts and where to find them: Overcoming few-shot prompt order sensitivity. [2] Language Models are Unsupervised Multitask Learners. --- Rebuttal Comment 1.1: Comment: Thanks for the response to my questions. But my main concerns (weakness 1) still hold and thus I'll keep my score.
Summary: The paper presents G-Prompt, a novel framework designed to model Text-attributed Graphs (TAGs) more efficiently. G-Prompt addresses the existing limitations of current methods by combining a graph adapter with task-specific prompts to extract node features, thereby integrating information from both the graph structure and downstream tasks. The graph adapter makes pre-trained language models (PLMs) aware of the graph structure, and the task-specific prompts provide task-related interpretations. Experimental results reveal that G-Prompt outperforms existing methods in few-shot learning and demonstrates robust performance in zero-shot settings. The generated node representations also exhibit high interpretability concerning task performance, indicating the framework's effectiveness in harnessing both graph and task-specific information. Strengths: 1. The paper introduces a framework, G-Prompt, which effectively integrates graph and task-specific information for node feature extraction in text-attributed graphs (TAGs), which addresses a significant gap in existing methods. 2. G-Prompt outperforms state-of-the-art methods on few-shot node classification and performs comparably with fully-supervised baselines in zero-shot settings. Weaknesses: 1. While the G-Prompt framework is innovative, it builds heavily on existing concepts such as PLMs, GNNs, and the use of prompts. The novelty lies more in the combination and application of these techniques rather than in entirely new concepts. 2. The experiments could have been strengthened by utilizing more advanced pre-trained language models, such as ALBERT, DeBERTa, etc., to demonstrate wide applicability to other PLMs. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Major: Please address the weaknesses. Minor: There are several typos and stylistic issues. Please carefully revise your manuscript. 1. Table 1: what is "# Eeges"? might be 'edges'? 2. Figure 1: Romdom mask -> Random mask 3. 
line 85: fuction -> function 4. line 176: it's -> its Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: The authors have not explicitly discussed the limitations or potential negative societal impacts of their work in the sections of the paper that were processed. While the authors demonstrate the effectiveness of G-Prompt on few-shot and zero-shot learning scenarios, it's unclear how well the framework generalizes to other types of tasks or datasets. The authors could discuss this potential limitation and provide insights to address it. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: First of all, thank you very much for reviewing our paper and for your positive acknowledgment of the problem setting we presented in our paper, as well as your recognition of the effectiveness of our method. Following your suggestions, we have made the following modifications: **1. Conduct experiments on ALBERT [1] and GPT2 [2] (Weakness 2):** * We have conducted experiments involving ALBERT and GPT2. **2. Summarize our method's innovation (Weakness 1):** * We have added a summary of the method's innovation in the Method section. **3. Writing revision (Question 1):** * We have carefully revised the paper and provided additional descriptions for the relevant experiments. Here is a detailed response to the suggestions you provided: **1. We sincerely appreciate your suggestions regarding utilizing more PLMs (Pre-trained Language Models) (Weakness 2).** We have added related experiments based on ALBERT. DeBERTa was not pre-trained with a mask-token prediction task, so it cannot directly utilize prompts for few-shot and zero-shot tasks, which is why it cannot be combined with our method. In addition, we supplemented experiments on GPT2. **The results show our method supports both BERT-like and GPT-like PLMs.** This means G-Prompt also has the potential to be combined with current state-of-the-art generative large language models. We believe these insights can inspire future work to explore integrating GNNs with open-sourced generative large models (LLAMA2 [3], ChatGLM2 [4], Qwen [5], etc.). **2. We are grateful for recognizing the innovation in G-Prompt and for your suggestions on our framework (Weakness 1).** **Response:** You are right that the current G-Prompt involves three components: pretrained language models, prompts, and GNNs. We may have spent too much space introducing GraphAdapter details in the submitted version. 
So let us re-clarify the core contributions and innovations of G-Prompt: **(a) Problem: Our work is the first to explore how to incorporate graph information into language model prompting**, which is of broad application value (it supports few-shot learning and zero-shot inference). **(b) Method: We propose a general algorithm to fuse graph information into language models.** Specifically, we design a GraphAdapter to capture graph information, and then fuse the captured information into PLMs' prompting process through NLP pretraining tasks. To verify the generality of this algorithm, and following your suggestion, we added experiments based on ALBERT/GPT2, with conclusions consistent with the RoBERTa [6] experiments. G-Prompt supports both BERT-like and generative pretrained models. These results show that the training approach of G-Prompt, also the main innovation of our method, is a generalizable algorithm that can be applied to various language models. **(c) Efficiency: Our method only trains the GraphAdapter, so the whole training only needs one inference pass of the PLMs.** This characteristic has significant advantages over end-to-end training in the current environment where large open-sourced models emerge rapidly. If a stronger language model appears, we only need one inference pass on current TAGs, and can then train a new G-Prompt for various downstream tasks using the saved representations. **For example, we supplemented experiments on 2 pretrained language models within the very short rebuttal time using just 2 P100s.** **Specific Modifications:** We appreciate the reviewer's feedback on this problem. In the new version of the paper, we have added a summary of the innovative aspects of our method at the end of the Methods section, which should provide readers with a clearer understanding of the innovative features and design intentions of G-Prompt. **3. 
We sincerely thank you for your suggestions on our writing (Question 1).** We sincerely apologize that due to our rushed writing, your reading experience was negatively impacted. We have carefully checked the typos and errors you summarized in the questions. We consolidated all issues pointed out by reviewers and thoroughly revised the full text. We modified all parts with unclear expression to make them easier for readers to understand. **We hope our responses have addressed your questions. We look forward to further suggestions and feedback from you.** **REFERENCES**: [1] ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. [2] Language Models are Unsupervised Multitask Learners. [3] Llama 2: Open Foundation and Fine-Tuned Chat Models. [4] GLM: General Language Model Pretraining with Autoregressive Blank Infilling. [5] Introducing Qwen-7B: Open foundation and human-aligned models. [6] RoBERTa: A Robustly Optimized BERT Pretraining Approach. --- Rebuttal 2: Comment: Thanks for your response. I'll keep my original rating.
Rebuttal 1: Rebuttal: To all reviewers, We sincerely appreciate your affirmation of G-Prompt and your insightful suggestions on its current limitations. Guided by your comments, we have conducted extensive additional experiments to supplement this paper, along with revisions to the content. As there are many changes, we provide an overview of the modifications to the paper: **1. Report G-Prompt's performance based on ALBERT and GPT2:** We sincerely appreciate Reviewer CCnd's insightful comments! We conducted experiments to evaluate the effectiveness of G-Prompt based on ALBERT-Large [1] and GPT2-Large [2]. It is noteworthy that there are differences in pretraining tasks between BERT-like models and GPT2. However, the GraphAdapter structure remains unchanged for all models, with only minor modifications to the input and the corresponding labels of the loss function. The experiment results demonstrate that: (a) the prompt representation of GPT2 remains effective on TAGs, and (b) GraphAdapter not only works well with BERT-like models but also supports GPT-like models. This further confirms that G-Prompt is a general framework. #### Table 1. The performance of G-Prompt based on different PLMs. Each row corresponds to a specific method; each column lists the performance of the methods for a specific PLM on the dataset. Accuracy is used as the evaluation metric for Arxiv, while AUC is used for the other two datasets. 

|Method|Arxiv: ALBERT-L|Arxiv: RoBERTa-L|Arxiv: GPT2-L|Instagram: ALBERT-L|Instagram: RoBERTa-L|Instagram: GPT2-L|Reddit: ALBERT-L|Reddit: RoBERTa-L|Reddit: GPT2-L|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|Cls-Embedding|0.4297|0.5414|N/A|0.5407|0.5385|N/A|0.5366|0.5236|N/A|
|Prompt-sparse|0.5466|0.5784|0.5580|0.5511|0.5721|0.5580|0.5681|0.5761|0.5809|
|G-Prompt|**0.5589**|**0.5927**|**0.5863**|**0.5680**|**0.5917**|**0.5863**|**0.6010**|**0.6167**|**0.5956**|

**2. Add prompt-related ablation experiments:** (Thank reviewers 4Eon and xGC6 for pointing this out.) We tested 5 different prompts on each dataset following classic prompt exploration work [1]. The prompts fall into three categories: task-relevant (the prompt used in this paper), no task information, and irrelevant. The results are shown in Table 2, with the corresponding prompts for Arxiv listed in Table 3. It can be seen that (1) prompts containing task information perform better than ones without task information, (2) G-Prompt brings significant gains over different prompt representations, and (3) G-Prompt's performance correlates with the prompt representation's performance. #### Table 2. The performance of G-Prompt based on different prompts. Each row corresponds to a specific prompt; each column lists the performance of the prompt for a specific method on the dataset. The details of the different prompts can be found in Table 3. The evaluation metrics for the different datasets are consistent with Table 1. 

|Cate|Prompt id|Arxiv: RoBERTa-L|Arxiv: +G-Prompt|Instagram: RoBERTa-L|Instagram: +G-Prompt|Reddit: RoBERTa-L|Reddit: +G-Prompt|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|Task specific|0|0.5784|0.5927|0.5721|0.5833|0.5761|0.6167|
|No task information|1|0.5485|0.5854|0.5522|0.5686|0.5516|0.5895|
||2|0.5648|0.5868|0.5504|0.5710|0.5665|0.5861|
||3|0.4944|0.5794|0.5587|0.5804|0.5552|0.5919|
|irrelevant|4|0.5550|0.5902|0.5444|0.5686|0.5546|0.5853|

#### Table 3. 
Details of the different prompts on different datasets. **[MASK]** represents the masked token. All prompts are added before the text features ([text]) of the node. 

|Cate|Prompt id|Arxiv|
|:-:|:-:|:-:|
|Task specific|0|This is a paper published on the **[MASK]** subject of Arxiv, its abstract is: [text]|
|No task information|1|This is a **[MASK]** paper. [text]|
||2|This is **[MASK]**. [text]|
||3|**[MASK]**. [text]|
|irrelevant|4|My favorite fruit is **[MASK]**. [text]|

**3. Added experiments on flattening graphs.** Following reviewer Fbx8's suggestion and our paper's approach, we explored whether graph information can be directly incorporated into PLMs for prompting by flattening the graph. We defined an "edge-flatten" approach. Specifically, this method first concatenates the text of two nodes directly along each edge as input to the language model, i.e., predicting based on a node's own information and one neighbor's. Predictions are then aggregated edge-by-edge from the neighbors. Due to the input-length limit of language models, this method inevitably needs to truncate texts. For a controlled experiment, we added "Prompt-sparse\*", which uses the same truncation as "edge-flatten". The results are shown in Table 4. More details can be found in our response to reviewer Fbx8. #### Table 4. The performance of different methods on three datasets. Each row corresponds to a specific method; each column lists the performance of a specific method on the dataset. The evaluation metrics for the different datasets are consistent with Table 1. 

|Dataset|Arxiv|Instagram|Reddit|
|:--:|:-:|:-:|:-:|
|Prompt-sparse|0.5784|0.5721|0.5761|
|Prompt-sparse*|*0.5806*|0.5493|0.5577|
|Prompt-Flatten|0.5731 (sample)|*0.5815*|*0.5844*|
|G-Prompt|**0.5927**|**0.5917**|**0.6167**|

#### Table 5. Time efficiency analysis on the Arxiv dataset. Each row corresponds to a specific method; each column lists the time for a specific stage. 
|Methods|SSL stage (s/epoch)|Downstream task (min)|
|:-:|:-:|:-:|
|GLEM|N/A|255|
|GIANT|1818.52|1.2|
|G-Prompt|1039.01|152|

**REFERENCES**: [1] ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. [2] Language Models are Unsupervised Multitask Learners. [3] RoBERTa: A Robustly Optimized BERT Pretraining Approach. [4] Learning on Large-scale Text-attributed Graphs via Variational Inference. [5] Node Feature Extraction by Self-supervised Multi-scale Neighborhood Prediction.
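The template design in Table 3 (every prompt is prepended to the node text and contains a single masked slot) can be sketched as below. This is an illustrative sketch only: the token budget and whitespace-based word counting are simplifying assumptions; a real implementation would use the PLM's own tokenizer.

```python
MASK = "[MASK]"

# Templates from Table 3 (Arxiv); ids follow the table's "Prompt id" column.
PROMPTS = {
    0: f"This is a paper published on the {MASK} subject of Arxiv, its abstract is:",  # task-specific
    1: f"This is a {MASK} paper.",       # no task information
    2: f"This is {MASK}.",               # no task information
    3: f"{MASK}.",                       # no task information
    4: f"My favorite fruit is {MASK}.",  # irrelevant
}

def build_prompt(prompt_id, node_text, max_tokens=512):
    """Prepend the template, then truncate the node text so the whole
    input fits the PLM's token budget (whitespace words as a rough
    proxy for subword tokens)."""
    template = PROMPTS[prompt_id]
    budget = max(max_tokens - len(template.split()), 0)
    words = node_text.split()[:budget]
    return f"{template} {' '.join(words)}"
```

The truncation step mirrors what the "edge-flatten" baseline has to do when concatenated neighbor texts exceed the model's input length.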
NeurIPS 2023
Summary: The paper introduces a new framework called G-Prompt for analyzing text-attributed graphs, which are commonly found in real-world networks. The existing methods for analyzing these graphs have limitations in improving performance when there is limited training data. G-Prompt addresses this issue by combining a graph adapter and task-specific prompts to extract better features from the graph. The experimental results show that G-Prompt outperforms existing methods in classifying nodes with limited training data. It also provides more understandable results and performs comparably to fully-supervised approaches in scenarios where no training data is available. Strengths: 1. The paper addresses an interesting problem setting of learning on text-attributed graphs (TAGs). This problem setting has practical implications and contributes to advancing the understanding of graph-based machine learning. 2. The proposed G-Prompt framework demonstrates good performance compared to baseline methods. It outperforms existing state-of-the-art approaches in node classification tasks, particularly in few-shot learning scenarios. Weaknesses: 1. The proposed method lacks sufficient novelty as it combines existing techniques of graph adapters and prompt-based embedding learning, resulting in limited technical innovation. 2. The ablation study of different modules in the proposed framework is lacking. The authors do not sufficiently analyze the individual contributions and impact of each component. In other words, how do the prompting and the graph adapter raise the performance individually? 3. The description of baselines is unclear as there are no citations or explanations provided. It is unclear what the baseline GAE and GIANT refer to, and this lack of clarity hinders the reader's understanding and evaluation of the proposed method. 4. 
The paper contains several typographical errors, such as the misspelling of "GIANT" as "GAINT" in line 242 and the misspelling of "random" as "rondom" in Figure 1. These errors suggest a need for thorough proofreading. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. The authors are doing classification tasks on all three datasets, however, they apply different evaluation metrics to them (ACC for Arxiv and ROC for Instagram and Reddit). Is there a reason for doing so? It's better to have some discussion here. 2. Could the authors further explain the motivation of the GraphAdapter? How it becomes context-friendly and prompting-friendly? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: No, didn't see any limitations addressed in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: First of all, we sincerely appreciate your careful reading of our paper, your positive affirmation of our research questions, and your recognition of the effectiveness of our proposed methods. The summary of our response is as follows: 1. **GraphAdapter Design (Question 2):** - We have reorganized and expressed the motivation behind GraphAdapter more clearly. - We have meticulously revised the explanation of "Context-friendly" and "Prompting-friendly" in Section 3.2 to enhance clarity. 2. **Suggestions for Method Novelty (Weakness 1):** - In terms of writing, we have emphasized the core contributions of our method in both the introduction and the model section, and we have highlighted the novelty of our approach compared to existing methods. - Experimentally, we have incorporated experiments involving ALBERT [1]/GPT-2 [2] to demonstrate the generalizability of our framework. 3. **Suggestions for Ablation Experiment (Weakness 2):** - In writing, we have rephrased the introduction to the ablation experiments in Section 4.3, providing a more comprehensive analysis of the results. Additionally, we have made it clearer how the main experiment table corresponds to the respective ablation experiments. - Experiment-wise, we have included performance results for prompt representations under different prompts, as well as results after G-Prompt processing. 4. **Writing Suggestions (Weakness 3 & Weakness 4):** - We have consolidated all the issues pointed out by the reviewer and thoroughly revised the entire manuscript. - We have added references to the relevant baselines and explanations to enhance the clarity of our descriptions. - We have made necessary modifications to any unclear expressions to ensure better reader comprehension. 5. **Dataset Question (Question 1):** - We have updated the dataset description to include the rationale for using ROC-AUC evaluation on the Reddit and Instagram datasets. Here is a detailed response: **1. 
Thanks for your question about the motivation behind GraphAdapter.** **We designed GraphAdapter with the following motivations:** (a) to enable few-shot/zero-shot learning using prompts, and (b) to allow PLMs (Pre-trained Language Models) to utilize graph information during the prompting process. To achieve this, GraphAdapter needs to utilize graph information and combine it with the prompt outputs. We propose using the unlabeled text data present in TAGs (Text-Attributed Graphs) to train GraphAdapter with the language model's loss function. However, during training, GraphAdapter only sees the existing TAGs' text, while during prompt-based node representation, the input to GraphAdapter from prompts is unseen. To address this, we need to **(a) preserve the language model's contextual understanding** (so it can interpret the added prompts), and **(b) generalize the learned graph information to unknown words, as prompt content might not appear in the existing text data.** Thus, "context-friendly" refers to not disrupting the original contextual modeling of the language model. We achieve this by placing GraphAdapter after the last layer of the transformer. "Prompting-friendly" means avoiding learning token-specific content. For instance, if the PLM predicts that a masked token is "apple" but its label is "orange", a direct linear transformation might learn a mapping from "any word" to "orange", leading to overfitting. Therefore, we train on "apple" and its "neighbor influence", determined solely by the graph and neighbor features. This way, prompt predictions include both the PLM's predictions and the neighbor influence. **2. Thanks for your suggestion regarding the novelty of G-Prompt:** We apologize that, due to our writing, the method's contribution may have appeared overly focused on network structure. The motivation for our GraphAdapter can be found in the previous answer. Indeed, G-Prompt combines currently popular techniques. 
But we want to emphasize that **we are the first to propose training GNNs (graph neural networks) as an adapter through the pre-training tasks of language models.** Specifically, our innovations are: (a) How to incorporate graph information into the PLMs' prompting process itself has not been explored in previous works. (b) Our method is general and can be combined with various PLMs. To further demonstrate this, we added experiments based on ALBERT and GPT-2 during the rebuttal period (see Table 2). The results show that G-Prompt can further improve the few-shot capabilities of prompt representations. (c) G-Prompt can accomplish what previous graph methods have struggled with: zero-shot inference on graphs. We believe these insights about training can inspire more work combining PLMs and GNNs to explore few-shot/zero-shot learning on graphs. **3. Thank you for your valuable feedback on our ablation experiments:** We conducted additional experiments using multiple prompts to explore the relationship between G-Prompt and the prompts. The results are presented in Table 3, and the prompt details are in Table 4. These experiments show that task-related information improves prompt representations in downstream tasks. Importantly, G-Prompt's representations correlate positively with the original prompt representations and consistently outperform them. This result indicates that G-Prompt is robust to the choice of prompt. **4. Thank you for your suggestions regarding our writing:** We have thoroughly revised the whole paper following your suggestions. **5. We appreciate your question about our dataset.** We chose ROC-AUC as the main evaluation metric for the Instagram and Reddit datasets because of their imbalanced positive-to-negative sample ratio. **We hope our responses have addressed your questions.
We look forward to further suggestions and feedback from you.** REFERENCES: [1] ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. [2] Language Models are Unsupervised Multitask Learners. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed rebuttal. I'll keep my original score.
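As a rough illustration of the "context-friendly" / "prompting-friendly" design discussed in point 1 of the rebuttal above, the residual combination of a frozen PLM head with a graph-based adapter can be sketched as follows. All names, shapes, and the zero initialization here are our own assumptions for illustration, not the paper's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
d, vocab = 16, 100

# Frozen PLM pieces (random stand-ins for illustration).
lm_head = rng.normal(size=(d, vocab))   # maps hidden state -> token logits
W_adapter = np.zeros((2 * d, vocab))    # trainable adapter, initialized at zero

def prompt_logits(h_last, h_neighbors):
    """Combine the PLM's own prediction with a neighbor-influence term.

    h_last: hidden state of the masked/prompt token after the *last*
        transformer layer (the PLM itself is untouched, so its contextual
        modeling is preserved -- "context-friendly").
    h_neighbors: aggregated neighbor representation from the graph; the
        influence term is driven by graph-derived features rather than a
        learned token-to-label mapping -- "prompting-friendly".
    """
    base = h_last @ lm_head                                   # PLM prediction
    influence = np.concatenate([h_last, h_neighbors]) @ W_adapter
    return base + influence                                   # residual combination

h = rng.normal(size=d)
nbrs = rng.normal(size=d)
out = prompt_logits(h, nbrs)
assert out.shape == (vocab,)
# With the adapter at zero init, the output equals the frozen PLM's logits.
assert np.allclose(out, h @ lm_head)
```

The residual form means the adapter can only *adjust* the PLM's prediction, never replace it, which is one way to read the "context-friendly" requirement.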
On kernel-based statistical learning theory in the mean field limit
Accept (poster)
Summary: The authors explore the mean field limit of kernels and their reproducing kernel Hilbert spaces, providing novel theoretical tools and insights for tackling large-scale problems. Furthermore, they employ these kernels in statistical learning, with a particular focus on Support Vector Machines. This new form of limit for learning problems has not been investigated before in the statistical learning theory literature, making the authors' research a valuable contribution to the field. Strengths: 1. The authors provide a comprehensive investigation of the mean field limit of kernels and their reproducing kernel Hilbert spaces. 2. The research extends the basic theory of mean field kernels, providing novel approximation results and a variant of the representer theorem for mean field kernels. 3. The authors introduce a new form of limit for learning problems and provide convergence results for Support Vector Machines using mean field kernels. Weaknesses: 1. The limitations or drawbacks of the research are not explicitly stated. 2. Section 4 only provides a consistency result instead of a generalization error bound. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. What is the main improvement of this paper compared to the reference [15]? 2. What is the relationship and difference of this study compared to neural tangent kernel? What is the advantage of mean field kernels? 3. Any concrete examples for mean field kernels? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your review and insightful questions. Below we address your concerns and answer the posed questions. **Weaknesses** >Section 4 only provides a consistency result instead of a generalization error bound. Indeed, we only consider convergence per se in Section 4 and do not provide rates or generalization bounds. These latter aspects are very interesting and relevant, and are the subject of ongoing investigations. However, we would argue that establishing consistency is already interesting and challenging, which motivated us to focus on it in the present work. **Questions** >What is the main improvement of this paper compared to the reference [15]? Reference [15] does not consider mean field limits of kernels in the context of statistical learning theory, so the focus of this work and [15] is rather different. The contributions of this work, in particular in comparison with [15], are as follows:

1. Extending and completing the theory of mean field limits of kernels and RKHSs
   * Theorem 2.3.2: We show that the limiting function (whose existence was established in [15, Corollary 4.3]) is actually in the RKHS of the mean field limit kernel, and shares the same RKHS norm bound. On the one hand, this result is necessary for the proofs in Sections 3 and 4. On the other hand, it completes the commutative diagram in Figure 1, which was already suggested in [15] but was missing an important link. Note that the proof of this result requires considerably more sophisticated tools than those used in [15].
   * $\Gamma$-convergence inequalities from Lemmas 2.4, 2.5: These results are necessary for the proofs in Sections 3 and 4, and make the theory of mean field limits of kernels much easier to handle. The idea to use $\Gamma$-convergence in the present situation and to establish the necessary inequalities is a contribution of this work. Interestingly, it seems that this is also the first time that $\Gamma$-convergence arguments have been used in the context of reproducing kernels.
   * Theorem 2.3.1: In contrast to the results in [15], we do not need to use a further subsequence. On the one hand, this is necessary for the $\Gamma$-convergence results; on the other hand, it completes the commutative diagram from Figure 1.
2. Approximation and mean field limits of kernels
   * Proposition 3.1, Remark 3.2: The approximation capabilities of mean field limit kernels have not been investigated in [15].
   * Theorem 3.3: In many cases, numerical implementations of kernel methods rely on various forms of the representer theorem, and such a result in the mean field limit is completely new. Furthermore, it seems that this is also the first representer theorem that considers the situation of limits of kernels.
3. Statistical learning theory and mean field limit kernels
   * Statistical learning theory setup in the mean field limit: The formalization of such a mean field limit, including the existence result in Proposition 2.1 and the notion of mean field convergence of probability distributions, is new (and nontrivial). Furthermore, it appears that this is also the first instance of a limit of statistical learning problems that has been investigated.
   * Convergence results for SVMs and their risks (Section 4): All of these results (and their setting) are new. Note that we also need new and more sophisticated techniques compared to [15] to prove these results.

>What is the relationship and difference of this study compared to neural tangent kernel? What is the advantage of mean field kernels? This is an excellent question. Mean field limit kernels and the neural tangent kernel (NTK) are very different objects, though both arise through a large scale limit (i.e., the number $M$ of entities going to infinity).
Some of the technical differences are:

* Mean field limit kernels arise as the limit of kernels, whereas the NTK is a way to connect neural networks to kernel methods.
* The overall input space stays the same for the NTK (the number of hidden units goes to infinity), whereas the input space changes for mean field limit kernels.
* The convergence notions are different (the NTK usually involves convergence in probability, whereas we consider mean field convergence results).

It is difficult to say whether the NTK or mean field limit kernels are "better", since these objects are very different. Using mean field limit kernels in the context of neural network theory could be interesting, but we have not investigated this question. The main motivation for mean field limit kernels (and their practical strength) is their usage in the context of kinetic theory, as outlined in the introduction (see also the general response above for a more detailed description of our motivation). >Any concrete examples for mean field kernels? In this work, we focus on the general theory, which we consider an advantage: all of our results are immediately applicable to any sequence of appropriate kernels with a mean field limit. Large concrete classes of mean field limit kernels (more precisely, kernel sequences that have a mean field limit kernel) have been introduced in [15], in particular:

* Kernels from the well-known pull-back construction
* Double-sum kernels

Since this work focuses on the theory of mean field limit kernels in statistical learning (which required extending the general theory of these kernels, cf. Section 2, and dealing with approximation questions, cf. Section 3), we decided not to include this aspect here. Actually, we recently derived an analytical form of a mean field kernel, which we use for numerical experiments in the context of interacting particle systems. However, we found that including this (and the relevant background from kinetic theory) would deteriorate the quality of the submission (since it dilutes the focus).
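To make the double-sum construction mentioned above concrete, here is a minimal numerical sketch (our own illustration; the exact normalization and the Gaussian base kernel are assumptions, not details taken from [15]). A kernel between two $M$-particle states is obtained by averaging a base kernel over all particle pairs, so it depends on the states only through their empirical measures:

```python
import numpy as np

def double_sum_kernel(xs, ys, ell=1.0):
    """k_M(x, x') = (1/M^2) * sum_{i,j} k(x_i, x'_j) with a Gaussian base
    kernel k on the one-particle state space X = R. Since k_M only depends
    on the empirical measures of the two ensembles, it is invariant under
    permutations of the particles."""
    diffs = xs[:, None] - ys[None, :]
    return np.exp(-diffs**2 / (2.0 * ell**2)).mean()

rng = np.random.default_rng(0)
M = 500
xs = rng.normal(0.0, 1.0, size=M)  # state of system 1
ys = rng.normal(0.5, 1.0, size=M)  # state of system 2
val = double_sum_kernel(xs, ys)
assert 0.0 < val <= 1.0            # Gaussian base kernel is bounded by 1
```

Permutation invariance is the key structural feature here: it is what allows the kernel to pass to a kernel on probability measures in the mean field limit.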
Summary: This paper studies the theory of RKHSs consisting of functions over the space of probability distributions. Complementing the related study [15], the work develops a particle-based approximation theory for the RKHS and studies support vector machines that take distributions as inputs. Moreover, statistical consistency is also proven. Strengths: The paper organizes the theory of RKHSs defined over the space of probability distributions well. The results seem technically correct. There are potentially many applications in machine learning. Hence, this work can be of interest to the machine learning community. Weaknesses: - While I recognize many potential applications, it would be nice if the authors could provide concrete examples and datasets to emphasize the importance of the paper. - The usefulness and computational complexity of the learning setups studied in the paper are somewhat unclear because of the lack of specific examples and experiments. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Readers might be interested in the particle complexities as $M\to\infty$ because the large number of particles can affect the computational complexity. How large is $M$ needed to achieve a given accuracy in a certain sense? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: It is somewhat unclear how many applications can exist. It would be nice if some applications were mentioned. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your insightful review. We now address your concerns and answer the posed questions. **Weaknesses** >While I recognize many potential applications, it would be nice if the authors could provide concrete examples and datasets to emphasize the importance of the paper. We have decided to focus on the theory in order to avoid overloading the submission (presenting the theory properly with all relevant details already takes up a lot of space). Furthermore, we would like to stress that beyond concrete applications, the theory itself appears to be interesting and relevant for the machine learning community:

* Kernels and RKHSs are important and well-studied, but the type of kernel limits we consider (motivated by concrete applications) is new
* To the best of our knowledge, the limit of statistical learning problems as introduced and investigated in Section 4 is new (even the notion of such a limit setup)
* We connect kinetic theory (MFL, $\Gamma$-convergence) with the theory of reproducing kernels

Finally, regarding concrete examples and datasets, see the paragraph below. >The usefulness and computational complexity of the learning setups studied in the paper are somewhat unclear because of the lack of specific examples and experiments. The focus of this submission is on the kernels / RKHSs / statistical learning problems / SVMs in the mean field limit; investigating and discussing computational complexity and practical issues is unfortunately beyond the scope of this submission. This approach is similar to past NeurIPS papers on related topics, for example [12]. We agree that these are relevant issues, but in our opinion, they should be carefully treated in a separate work. **Questions** >Readers might be interested in the particle complexities as $M\rightarrow\infty$ because the large number of particles can affect the computational complexity. How large is $M$ needed to achieve a given accuracy in a certain sense?
This is an excellent and important question. Regarding the computational complexity, when working with the limit objects directly (e.g., an SVM with a MFL kernel), a large $M$ is not a problem -- after all, this is one of the main motivations of this approach. This is completely analogous to the situation of kinetic PDEs, where only the state dimension (corresponding to our dimension $X$), but not the number of particles (here $M$), plays a role. Regarding the question of accuracy, we indeed do not provide rates of convergence in the mean field limit. To the best of our knowledge, such rates are not readily available even in classic applications of kinetic theory, and hence our setup inherits this feature. Borrowing statistical terminology, our results are about consistency, not rates. However, we argue that already the convergence per se (i.e., consistency) is interesting, relevant, and helpful (since it justifies a kinetic theory approach for machine learning methods in this setting and might lead to more efficient numerical methods applicable on the equivalent mesoscopic level). Furthermore, note that having rates w.r.t. $M$ might not be very consequential in practice, since $M$ is determined by the problem (e.g., the size of the population of interacting particles), and cannot be chosen by the learner (in contrast to the data set size $N$, which often can be increased by simply collecting more data). Nevertheless, providing rates for the mean field limit convergence is a very interesting question that should be investigated in future work. >It is somewhat unclear how many applications can exist. It would be nice if some applications were mentioned. Due to space constraints, and our focus on theoretical questions, we have not been able to discuss applications in detail. However, we are motivated by concrete learning problems, as we outlined in the introduction.
Since the question of applications/learning scenarios was raised by several reviewers, we have elaborated on this in the general author's answer above. --- Rebuttal Comment 1.1: Title: After reading the rebuttal Comment: Thanks for the response. A main concern raised by reviewers was the applicability of the theory to real applications. And the authors have adequately addressed this concern and I have been convinced of its significance. Thus, I would like to raise the score: 4 -> 5.
Summary: This paper develops mathematically rigorous construction of the mean field limit of kernels of probability measures, which is obtained as a limit of a sequence of kernels with increasing input dimensions, and its application to SVMs. This is motivated by the analysis of interacting particle systems, when many observations of particles are available and we want to consider a function governing the dynamic as a function of a probability measure rather than a finite-dimensional vector. Specifically, this paper first confirms the validity of such kernels by extending the existing results: the existence of the mean field limit of functions and kernels, and relations between a sequence of kernels and the mean-field limit of kernels. Then, they prove the representer theorem (a mean-field limit of a sequence of functions uniformly approximated by each corresponding RKHS functions can also be approximated by the mean-field limit of RKHS functions) and convergence of the minimizers. Finally, the results are applied to an L2-regularized (with $\lambda \geq 0$) loss minimization problem over RKHS. They show the existence and uniqueness of the solution (when $\lambda > 0$) and convergence of the loss value. Strengths: ### Problem settings are well-motivated The purpose of this paper is to develop a foundation of the theory of kernels with probability measures as input in order to view the state of an interacting particle system not as observations of a finite number of particles, but as a function of a probability measure that represents the distribution of the particles. The problem of learning a function with infinite dimensional input may have many other applications, and the potential impact in this area is of a certain significance. ### Rigorous theory for infinite-dimensional input functions This paper investigates the formulation and thus the basic properties of RKHS for the limit of a sequence of kernels of increasing dimension. 
However, it should be noted that the main focus is basically on how to define the limit, and not on how to use it specifically. It can be said that this research has a certain novelty if there is no similar discussion. Weaknesses: ### Misleading title It looks like a paper about generalization error analysis for the community of mean-field neural network optimization. Therefore, I strongly recommend that the authors change the title to characterize their contents more precisely. ### Missing discussion on whether the existing analysis on learning infinite-dimensional input functions can be applicable. The final part of this paper is essentially a problem of learning functions with infinite-dimensional inputs, but the discussion of previous works / applications is missing. The authors should make a remark on whether such existing theories are applicable and hopefully conduct some experiments to verify their theory. An example of previous work I think is: F. Ferraty, A. Mas, and P. Vieu. Nonparametric regression on functional data: inference and practical aspects. Australian & New Zealand Journal of Statistics, 49(3):267–286, 2007. ### The authors say that the first part of the paper is an improvement over a paper under review. However, I cannot tell the difference, or whether such an improvement is necessary for the remaining parts. This paper consists of three parts: (i) Definitions of functions and kernels in the mean-field limit, (ii) approximation ability and convergence of the minimizers, and (iii) application to SVMs based on (ii). The authors argue that (i) is an improvement over a paper under review. However, I cannot see the difference and whether such an improvement is necessary for deriving (ii) and (iii). If not, this paper looks like an assortment of different results. Also, from the perspective of preventing double submission, I think the authors should clarify the difference between their results in (i) and whether such an improvement is necessary.
Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Is it possible to add any experiment to the paper? - Could you tell me more about the difference between Prop. 2.1 and the previous work, and whether such improvement is necessary for the rest part? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: In order to avoid confusion with optimization and generalization theory of mean-field neural networks, the authors are encouraged to change the title. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your very detailed and careful review. Below we address your concerns and answer the posed questions. **Weaknesses** >Misleading title [...] Thank you for pointing out this risk of confusion, which we weren't aware of. To avoid any possible confusion, we suggest changing the title to "On kernel-based statistical learning in the mean field limit". We would argue that the term "mean field limit" (MFL) should remain in the title, since this is the accepted term for the considered setting (cf. [7]). >Missing discussion on whether the existing analysis on learning infinite-dimensional input functions can be applicable. While functional data analysis (FDA), as considered in Ferraty et al., also deals with infinite-dimensional data, the crucial difference is that the infinite-dimensional data here arises as a particular limit (the MFL) of finite-dimensional data, and we are interested in this limit (and its consequences for kernels / kernel-based methods). Furthermore, the typical data in FDA is rather different ($L^2$ functions, often on $[0,1]$). The situation is similar with kernels for infinite-dimensional inputs: of course, after going to the limit, we are in this realm (as mentioned in the Introduction, cf. [12]), but this work is about the limits themselves (definition, existence, properties) and the repercussions/chances for kernel methods/theory. We will add a corresponding discussion to the Introduction to make this aspect clearer. >The authors say that the first part of the paper is improvement from an under review paper. The paper [15] has been accepted and published in the meantime (doi: 10.3934/krm.2023010); we will update the bibliography accordingly. >This paper consists of the three part. [...] The main contributions in part (i) (Section 2) are:

1. The MFL function in Theorem 2.2 (whose existence was already established in [15, Corollary 4.3]) is contained in the MFL kernel RKHS, and shares the same RKHS norm bound.
2. Lemmas 2.4, 2.5, which are necessary for the $\Gamma$-convergence arguments used in Sections 3 and 4.

These results are novel (in particular, not contained in [15]), and the corresponding proofs are technically demanding and use much more sophisticated arguments than those utilized in [15]. Furthermore, these results are necessary for most of the proofs of the results in parts (ii) and (iii) (i.e., Sections 3 and 4). Additional contributions in part (i) are:

1. Proposition 2.1, which is necessary for the statistical learning theory setup in Section 4, cf. the Questions paragraph below.
2. Avoiding the use of a further subsequence in Theorem 2.1. On the one hand, this is necessary for the $\Gamma$-convergence arguments used later on; on the other hand, it makes the commutative diagram (Fig. 1) and its interpretation nicer.

We would like to emphasize that this submission builds on [15], but nothing claimed as a contribution of this submission is already contained in [15]. **Questions** >Is it possible to add any experiment to the paper? We decided not to do so in order to keep the manuscript focused on the theoretical aspects (which already take up a significant amount of space). However, we are working on numerical experiments and empirical evaluations of the concepts introduced here. In our opinion, including such experiments (and the required background on interacting particle systems, kinetic equations, Monte Carlo methods, and numerical methods for kinetic PDEs) would severely degrade the quality of the presentation and make the submission less readable. >Could you tell me more about the difference between Prop. 2.1 and the previous work, and whether such improvement is necessary for the rest part? Proposition 2.1 is a generalization of a well-known result (cf. [6, Lemma 1.2]). More precisely, Prop. 2.1 allows sequences of functions with a non-compact input domain, whereas existing results like [6, Lemma 1.2] only work for compact input domains.
This extension is necessary because in general the third argument of the loss functions in Section 4 cannot be restricted to a compact set. Typically, a loss function $\ell_M$ is used as $\ell_M(\vec x_M, y, f_M(\vec x_M))$ for some input $\vec x_M$, output $y$, and hypothesis $f_M\in H_M$ (using the notation from the submission). However, in the regularized optimization approaches over $H_M$ as considered here, like the minimization problem $$ \min_{f_M\in H_M} \mathcal{R}_{\ell_M,D_N^{[M]}}(f_M) + \|f_M\|_M^2, $$ all $f_M\in H_M$ are allowed. This means that in $\ell_M(\vec x_M, y, f_M(\vec x_M))$, any $f_M\in H_M$ could appear, and since even for a fixed $\vec x_M\in X^M$ we hence cannot restrict the range of $f_M(\vec x_M)$ a priori, we cannot restrict the third argument of the loss functions to a compact subset of the real numbers. Note that clipping techniques frequently employed in kernel-based statistical learning theory (cf. [26, Def. 2.22]) unfortunately do not work here. Since we need the MFL of the loss functions $\ell_M$ for our setup, and since existing MFL results like [6, Lemma 1.2] consider only functions with compact input sets, we needed Prop 2.1. --- Rebuttal Comment 1.1: Comment: Dear authors, Thanks for the clarification. I consider the paper technically solid. I decided to hold my score. Best regards,
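As a side illustration of how a representer theorem makes regularized problems like the minimization above computable at the finite-$M$ level, here is a toy sketch entirely of our own construction (the double-sum kernel, the square loss in place of a general loss $\ell_M$, and all names are assumptions, not the paper's setup). With the square loss, the minimizer $f = \sum_i \alpha_i k(x_i, \cdot)$ has coefficients given by a linear system:

```python
import numpy as np

rng = np.random.default_rng(1)

def double_sum_kernel(xs, ys, ell=1.0):
    """Permutation-invariant kernel between two M-particle states."""
    diffs = xs[:, None] - ys[None, :]
    return np.exp(-diffs**2 / (2.0 * ell**2)).mean()

# N snapshots of an M-particle system, each labeled with a scalar feature.
N, M = 30, 50
X = rng.normal(size=(N, M))
y = X.var(axis=1) + 0.01 * rng.normal(size=N)  # toy "feature of interest"

# Gram matrix between snapshots.
K = np.array([[double_sum_kernel(X[i], X[j]) for j in range(N)]
              for i in range(N)])

# Representer theorem: the regularized minimizer lies in span{k(x_i, .)};
# for the square loss, the coefficients solve a linear system.
lam = 1e-3
alpha = np.linalg.solve(K + lam * N * np.eye(N), y)

def predict(x_new):
    k_vec = np.array([double_sum_kernel(X[i], x_new) for i in range(N)])
    return float(k_vec @ alpha)

pred = predict(rng.normal(size=M))
assert np.isfinite(pred)
```

The point of the sketch is only structural: however large $M$ is, the optimization reduces to an $N \times N$ system in the number of snapshots, which is the computational payoff of the representer theorem that the rebuttal alludes to.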
Summary: This work derives mean-field limits of kernels and their associated Reproducing Kernel Hilbert Spaces. In particular, the authors quantify the relationship between the finite-input RKHS and the RKHS of the mean-field kernel, derive a Representer Theorem for mean-field kernels, and provide asymptotic convergence guarantees for mean-field kernel SVMs. Strengths: The results are interesting and well-presented. The authors present a complete treatment of the mean-field limit for kernel methods. The proofs are clear and well-written. Weaknesses: 1. While the results are interesting on their own, I am a little skeptical about the applicability of these results in existing ML problems. I think this work would greatly benefit from a more detailed discussion on the motivation and applications. On a related note, I also encourage the authors to present concrete examples of mean-field kernels for ease of exposition. 2. The complete absence of rates of convergence to the mean-field limit (even for specific kernel classes) is somewhat unsatisfactory. I would appreciate it if the authors could comment on that. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: See above. Overall, the results are quite interesting and a bit of work on the motivation and overall presentation would greatly improve the paper. I would be happy to increase my score if the authors are able to address said concerns. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: Limitations are adequately discussed.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your review and insightful questions. **Weaknesses / Questions** >1. While the results are interesting on its own, I am a little skeptical about the applicability of these results in existing ML problems. I think this work would greatly benefit from a more detailed discussion on the motivation and applications. On a related note, I also encourage the authors to present concrete examples of mean-field kernels for ease of exposition. We agree that the motivation and the discussion of practical aspects are rather terse, but we wanted to properly present the theory. In the general response above, we have detailed a concrete learning scenario that motivated the present work, on which we want to apply the theory developed here. We will incorporate this into the Introduction, to make the motivation and application scenario clearer. Regarding examples of mean field kernels: Several general classes of these objects have been described in [15]. Moreover, recently a concrete analytical form of a mean field kernel has been derived which is used for ongoing numerical investigations. However, we believe that introducing and investigating such a concrete instance would not fit the scope of the present work. Actually, we would argue that the abstraction here is a major strength of the work, since all of our results apply to all mean field kernels as introduced in [15], just as the developments in Section 4 are valid for all sequences of loss functions compatible with Proposition 2.1. >2. The complete absence of rates of convergence to the mean-field limit (even for specific kernel classes) is somewhat unsatisfactory. I would appreciate it if the authors could comment on that. This is an excellent point. In this work, we focus entirely on convergence per se, and do not investigate rates of convergence in the mean field limit.
On the one hand, convergence itself is already an interesting and challenging problem, and also helpful from a practical perspective (justifying working on a mesoscopic level). On the other hand, having rates w.r.t. $M$ would be interesting for practitioners (providing a rule of thumb for when $M$ is large enough to confidently rely on a kinetic approximation) and also in connection with learning rates (e.g., considering the simultaneous limit $M,N\rightarrow\infty$). However, obtaining such rates is non-trivial and, to the best of our knowledge, they are usually not readily available even in classic applications of kinetic theory. We therefore considered this (very interesting) question beyond the scope of the present work. Establishing rates w.r.t. $M$ is actually the subject of ongoing work, and we conjecture that it requires additional regularity conditions on the problem (which would also be contrary to our goal here of being very general). --- Rebuttal Comment 1.1: Comment: I thank the authors for their response. Considering the absence of convergence rates to the mean-field limit (even under additional regularity assumptions), I would like to keep my score.
Rebuttal 1: Rebuttal: We thank the reviewers for their careful, detailed reviews and their interesting and insightful questions. We have answered each reviewer separately, and hope to have addressed all of their concerns and questions. In the general part, we would like to address an aspect that was raised by more than one reviewer, namely the motivation and application scenario of the present work. First of all, we would like to emphasize that the focus of this submission is on theoretical investigations, and in our opinion, the theory itself (beyond specific applications) is interesting and relevant for the machine learning community. Nevertheless, we were motivated by concrete learning problems (as outlined in the Introduction of the submission), which we consider relevant and interesting, and for which the developments presented here will be helpful in our opinion: Consider e.g. complex interacting particle systems (IPS) / multiagent systems (MAS), which are difficult to model using first principles. Important examples include (see for instance [22]):

* Animal populations (swarms of birds, schools of fish, colonies of microorganisms)
* Human crowds (pedestrian movement, gathering at large events like football games or concerts)
* Vehicular traffic (in particular, traffic jams)
* Markets (where the agents can be human or legal entities)
* Opinion dynamics (inside a society), like in [27]

Frequently the state of such a system/population can be easily measured, e.g., by:

* Video recordings or image snapshots of bird swarms/schools of fish, and microscopy recordings of microorganism colonies
* Aerial imaging of human crowds (e.g., via quadcopters)
* Polling and social media analysis for opinion dynamics

However, some interesting features of the whole system might be more difficult to measure.
For example, * How a swarm of birds or a school of fish will react to an external stimulus (like an approaching predator), given the current state of the population, e.g., whether there will be a change in the density/spread of the population or change in mean velocity. * Features of a society in opinion dynamics (average happiness, aggression potential, susceptibility to adversarial interventions), given the current "opinion state" Measuring such features can be difficult, for example, due to a required intervention. Formally, such a feature is a functional of the current state of the system, and since the state is often easy to measure, it would be useful to have an explicit mapping from state to feature of interest. However, since first principles modeling is unlikely to be successful in the domains considered here, it is promising to learn such a mapping from data (see the Introduction for a formalization). Since this is a standard supervised learning problem (state snapshots as input, noisy measurement of the feature of interest at the state snapshot), it is immediately amenable to kernel methods. Many of the systems considered above can consist of a large number of agents (thousands of birds in a swarm, millions of microorganisms in a colony), and working on the microscopic or individual level becomes impractical. As done in kinetic theory, it might be beneficial to work on the mesoscopic level, considering only the density of particles/agents on state space. For example, when investigating large swarms of birds, the current density of agents over the state space can be easily measured using commodity imaging equipment, whereas tracking individual birds would require high-resolution imaging and working with very high-dimensional data. Formally, going from the microscopic to the mesoscopic level corresponds to letting the number of particles/agents $M$ tend to infinity, and this can be made rigorous using for example the mean field limit. 
Based on results and experiences from kinetic theory (e.g., [7]), it is reasonable to assume that our feature of interest (a functional on the state space) has a mean field limit as well (see the Introduction and (1)). Returning to the learning problem, we now have a data set on the mesoscopic level, so the input data (state snapshots) are now probability distributions over the state space. Using kernel methods, we now lift kernels on the state space to kernels on probability distributions over the state space, cf. [12]. Now, all methods and results on the mesoscopic level should be connected to the microscopic level through the mean field limit, and this leads exactly to the problems investigated in this submission. In practice, we would apply the methods on the mesoscopic level to data originating from the microscopic level (though with large $M$). The various convergence results in Section 4 then ensure that the learning outcome on the mesoscopic level (here, the learned functional of the state) is an approximation to the result on the microscopic level for large $M$. This mimics the situation in kinetic theory, where kinetic PDEs (and associated numerical methods) are used for finite, but large, $M$ systems. We hope that this makes the motivation clearer, and we will adapt the Introduction accordingly.
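To make the lifting of kernels to particle configurations concrete, here is a minimal numerical sketch. It is not the authors' construction from [12]/[15]; it assumes one common way of defining such kernels, namely a double-sum over a base kernel, which depends on the particles only through their empirical measure and therefore admits a mean field ($M\rightarrow\infty$) limit given by a double integral:

```python
import numpy as np

def base_kernel(x, y, ell=1.0):
    """Gaussian (RBF) kernel on the single-particle state space."""
    return np.exp(-np.abs(x - y) ** 2 / (2 * ell ** 2))

def particle_kernel(xs, ys, ell=1.0):
    """Kernel on M-particle configurations via a normalized double sum:
    k_M(x, y) = (1/M^2) * sum_{i,j} k(x_i, y_j).
    This depends on the configurations only through their empirical
    measures, so as M grows it approaches the double integral of k
    with respect to the two underlying particle distributions."""
    K = base_kernel(xs[:, None], ys[None, :], ell)
    return K.mean()

rng = np.random.default_rng(0)
for M in (10, 100, 1000):
    xs = rng.normal(0.0, 1.0, size=M)   # particles of configuration 1
    ys = rng.normal(0.5, 1.0, size=M)   # particles of configuration 2
    # the printed values stabilize as M grows (mean field limit)
    print(M, particle_kernel(xs, ys))
```

For these two Gaussian particle laws the limit can even be computed in closed form, which makes the convergence easy to check numerically.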
NeurIPS_2023_submissions_huggingface
2023
Summary: In this work, the authors considered learning problems where the input is an interacting-particle / multi-agent system and studied the mean-field limit of the kernels, RKHS functions, and SVM solutions as the number of particles tends to infinity. The authors showed that, essentially, taking the infinite-particle limit can be interchanged with 1) obtaining the RKHS associated with a kernel, 2) obtaining the solution of a variational problem over the RKHS, and 3) obtaining the "SVM solution" within the RKHS. Strengths: This work continues the exploration by Fiedler et al. (2023) into the mean-field limit of RKHSs when we consider interacting-particle systems with a growing number of particles. Compared to Fiedler et al. (2023), the current work extends the theory by 1) tightening the theoretical result on the commutative relation between taking the infinite-particle limit and obtaining the RKHS (no longer requiring taking another subsequence & controlling the RKHS norm of the limiting function, as reported by the authors on Page 4), and 2) expanding the discussion to the mean-field limit of minimization problems in the RKHS. The proofs of these results appear to involve rigorous techniques from functional analysis and Gamma-convergence, though I was unable to fully verify their correctness. Weaknesses: While the infinite-particle limit of the learning problem leads us to interesting theoretical investigations, its practical relevance remains to be seen. Overall, the analysis is carried out at a rather general and abstract level, which is perfectly fine for establishing the theory but makes the motivation harder to interpret for the reader. It is therefore not clear to me whether the work could be of great interest to the NeurIPS audience. 
Technical Quality: 3 good Clarity: 2 fair Questions for Authors: I don't fully understand why the type of functions considered in Section 4 should be called SVM solutions - there is no support vector involved after all, and instead they seem closer to the solutions to ridge regression / ERM with Tikhonov regularization. The loss function $l_M$ defined on Line 265, Page 7 is a bit non-standard: it also takes the input vector $\vec{x}$ as an argument. Since it complicates things somewhat, it could be helpful to explain in what scenarios this term is relevant. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your careful and detailed review. Below we address your concerns, answer the posed questions, and provide additional comments and remarks. **Weaknesses** >While the infinite-particle limit of the learning problem leads us to interesting theoretical investigations, its practical relevance remains to be seen. Overall, the analysis is carried out at a rather general and abstract level, which is perfectly fine for establishing the theory but makes the motivation harder to interpret for the reader. It is therefore not clear to me whether the work could be of great interest to the NeurIPS audience. We fully agree that the present work focuses on theoretical aspects, which can make it challenging to convey the motivation and practical relevance. However, we decided to use this approach in order to avoid overloading the submission and keep it focused on the pertinent questions. Machine learning for interacting particle / multiagent systems has become a thriving field in recent years (cf. [21] and [R1]), and the challenges of very large-scale systems (corresponding to our $M\rightarrow\infty$ limit) in this context have barely been tackled so far. We are confident that the theoretical foundations we provide will be helpful in this area. In fact, we are currently working on empirical investigations of mean field limit kernels in the context of interacting particle systems, where the computational benefit of treating the mean field case will become apparent. However, including these aspects and the necessary background (from interacting particle systems, kinetic equations, numerical methods for kinetic PDEs) is beyond the scope of this submission, and we think it would actually detract from the quality of the presentation. 
Furthermore, as briefly mentioned in the Introduction, large-scale limits have become very popular in machine learning in different contexts, and we think that our theoretical results are an interesting contribution to this body of work, providing a different perspective. **Questions** >I don't fully understand why the type of functions considered in Section 4 should be called SVM solutions - there is no support vector involved after all, and instead they seem closer to the solutions to ridge regression / ERM with Tikhonov regularization. We agree that in the rather general setting that we consider, the solutions $f^\ast_{\mathcal{D}_N,\lambda}$, $f^\ast_{P,\lambda}$ etc., do not necessarily correspond to "classic" SVMs. However, we use the terminology from [26], and decided to follow their convention, cf. the text immediately after (5.8) and before Theorem 5.3 in this reference. This terminology has also been used in other NeurIPS submissions in the past, e.g., [R2]. However, to avoid confusion, we will add some qualifications to it, i.e., that our infinite-sample / empirical SVM solutions correspond to (Tikhonov) regularized ERM solutions over RKHSs. >The loss function $\ell_M$ defined on Line 265, Page 7 is a bit non-standard: it also takes the input vector $\vec x$ as an argument. Since it complicates things somewhat, it could be helpful to explain in what scenarios this term is relevant. We are following the rather general setup of [26, Chp 2]. A common special case is a supervised loss function as defined in [26, Def. 2.7], where the loss function does not depend on the input. In this special case, the setup in Section 4 simplifies considerably, e.g., we do not need to consider the mean field limit of loss functions anymore. However, there are practically relevant loss functions that depend on the inputs. A common example is given by unsupervised loss functions as defined in [26, Def. 
2.8], which encompass for instance loss functions used for density level detection, cf. [26, Example 2.9]. Finally, in some applications loss functions with all three arguments appear. For example, in classification problems the cost of misclassification might depend on the input (i.e., there are some instances where misclassification is more costly than in others). By treating the general case (as formalized in [26, Def. 2.1]), all of these cases are handled by our results. We hence decided to use this general form here. **Strengths** Finally, we would like to point out that apart from 1) and 2) mentioned in the review, this submission contains two other substantial contributions. * The limiting function alluded to in 1) is actually contained in the mean field limit kernel RKHS (and as a byproduct, we can even control the corresponding RKHS norm). While intuitively clear, showing this is highly nontrivial and has not been established in [15], which allows us to "close the commutative diagram" (Fig 1). * In Section 4, we provide an appropriate mean field limit setup of the standard statistical learning theory setting, which allows us to formulate and prove the various mean field limit results using mean field limit kernels. To the best of our knowledge, this has not been done before, and required some additional technical contributions (e.g., Prop 2.1 and Def. (10)). While we only provided an initial investigation of this setup, we expect that interesting follow up work can be conducted here (actually, some of it is already in progress). [R1] Lu, F., Maggioni, M., & Tang, S. (2021). Learning interaction kernels in heterogeneous systems of agents from multiple trajectories. _The Journal of Machine Learning Research_, _22_(1), 1518-1584. [R2] Christmann, Andreas, and Ingo Steinwart. "How SVMs can estimate quantiles and the median." _Advances in neural information processing systems_ 20 (2007).
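The "SVM solution" terminology discussed above denotes the minimizer of a regularized empirical risk over an RKHS. As an illustrative sketch (not the authors' general setting from [26] — it assumes the special case of a least-squares supervised loss and a Gaussian kernel), the empirical SVM solution then reduces to kernel ridge regression with a representer-theorem closed form:

```python
import numpy as np

def rbf(X, Y, ell=1.0):
    """Gaussian kernel matrix between two sets of 1-D inputs."""
    return np.exp(-(X[:, None] - Y[None, :]) ** 2 / (2 * ell ** 2))

def svm_solution(X, y, lam=0.1, ell=1.0):
    """Empirical 'SVM solution' for the least-squares loss:
    f* = argmin_{f in H} (1/N) sum_i (y_i - f(x_i))^2 + lam * ||f||_H^2.
    By the representer theorem, f*(x) = sum_i a_i k(x, x_i) with
    a = (K + lam * N * I)^{-1} y."""
    N = len(X)
    K = rbf(X, X, ell)
    alpha = np.linalg.solve(K + lam * N * np.eye(N), y)
    return lambda Xnew: rbf(Xnew, X, ell) @ alpha

X = np.linspace(-2.0, 2.0, 20)
y = np.sin(X)
f = svm_solution(X, y, lam=1e-3)
print(f(np.array([0.0])))  # close to sin(0) = 0 by symmetry of the data
```

For a general (possibly input-dependent) loss as in [26, Def. 2.1], no such closed form exists, but the minimizer over the RKHS is defined by the same variational problem.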
null
null
null
null
null
null
Learning to Reason and Memorize with Self-Notes
Accept (poster)
Summary: The paper presents a general prompting approach for LLMs: instead of generating intermediate "thoughts" after processing the prompt, as in Chain-of-Thought (CoT), this paper proposes adding "Self-Notes" **while reading the prompt**. That is, during the initial reading of the prompt, the model can generate intermediate Self-Notes, deferring the reading of the rest of the prompt until the Self-Note is complete. The approach is evaluated across multiple reasoning benchmarks, with supervised models, semi-supervised models, unsupervised models (which I'd call "unlabeled" models, because they *are* supervised on an augmented version of the supervised input), and few-shot prompted models. Strengths: ## Strengths * The idea of generating Self-Notes *while processing the input*, rather than at the end, is simple and novel. It is surprising that such a simple idea has not been used before. * The paper evaluates multiple datasets and tasks: Toy-Story (a synthetic multi-hop QA task), Algorithmic & Boolean (a synthetic source-code simulation), Chess Piecetype & Chess Move (a synthetic chess simulation). * The paper evaluates multiple supervision scenarios: supervised models, semi-supervised models, unsupervised models (which I'd call "unlabeled" models, because they *are* supervised on an augmented version of the supervised input), and few-shot prompted models. * The Related Work section places the paper well in the related literature. * The paper is well-written and easy to follow. Weaknesses: ## Weaknesses * The main weakness is the relatively small models that were used: * In the supervised and semi-supervised settings, the base model for both the baseline and the model that Self-Notes is applied to is **GPT-2-base** (there is actually no `gpt2-base` model on Huggingface. Do the authors mean `https://huggingface.co/gpt2`? If so, this is the **smallest** version of GPT-2, with 124M parameters). 
While providing a good initial indication, I am not sure that the conclusions for GPT-2 will hold for larger models (because of "emergent abilities" and the generally very different behavior of GPT-2-sized models compared to larger models). Even if the authors have access only to a single GPU, I think that there are larger models that can be fine-tuned on a single GPU. * The few-shot prompting experiments were performed on **GPT-J**, which is a bit disappointing - these days, it is relatively easy for the broad community to experiment with large models via a prompting API, and new few-shot prompting approaches such as Self-Notes can be easily evaluated on much stronger models such as GPT-3/4 / PaLM / Claude / Codex. * For the GSM-8K benchmark, the few-shot prompting experiments were performed using **GPT-3** (since it is not mentioned otherwise, I assume `text-davinci-001`). Since these experiments could easily be repeated using `text-davinci-002`, `text-davinci-003`, `code-davinci-002` by just replacing the model name, I am left with the conclusion that Self-Notes does not work with these stronger models. While these experiments are extensive, I believe that they do not confirm the strong claims of the paper, which hurts its soundness. Technical Quality: 2 fair Clarity: 4 excellent Questions for Authors: ## Questions 1. Wouldn't the approach be stronger if the question were provided **first**, before the context input? Currently, "The model can use notes as a form of working memory by **writing information that might be useful in the future**" (Lines 76-77). But how would the model know what information might be useful in the future, if it hadn't processed the question yet? 2. I would rename the "unsupervised Self-Notes" to "Unlabeled Self-Notes" or something in this spirit, because the model **is** supervised, but as far as I understand, on an augmentation of the dataset, rather than on a manually-labeled version of the dataset. 3. 
Imagine that you had the ability to run GPT-3/4 locally. Wouldn't generating Self-Notes be much slower than standard prompting? In standard prompting, the model reads the entire prompt in a single forward pass, where all prompt tokens can be processed in parallel. In Self-Notes, don't we have to feed the prompt tokens one-by-one, to check whether the model has predicted the next token to be "start-of-note"? (And in that case, stop feeding the original prompt and feed the generated note instead?) ## Summary Overall, the idea presented in this paper is really nice and novel. I also appreciate the authors' empirical efforts across various benchmarks and supervision scenarios. Unfortunately, the idea seems not to work with models newer than the first version of GPT-3 and supervised models that are larger than `gpt2` (small), and thus I gave low "soundness" and "contribution" scores, and overall a "weak accept" score. I would have given a higher score if the paper had demonstrated that the same idea can be useful to improve stronger models such as `text-davinci-002`, `text-davinci-003`, `code-davinci-002`, `gpt-3.5-turbo`, `gpt-4`, and supervised models that are larger than `gpt2` (even `gpt2-large` / `gpt2-xl` can be fine-tuned on a single GPU, I believe). I hope that the authors will strengthen the evaluation for the next version by evaluating with stronger base models. Although I vote for acceptance, I will understand if the other reviewers argue for rejection because of these evaluation limitations. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 2 fair Presentation: 4 excellent Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your comprehensive review. In particular, we appreciate the thorough summary of our contributions, including our empirical efforts across various benchmarks and supervision scenarios. We have addressed your comments regarding GPT-3+ and other OpenAI models in the response to all reviewers. --- > In supervised and semi-supervised settings, the base model for both the baseline and the model that Self-Notes is applied on is a GPT-2-base (there is actually no gpt2-base model on Huggingface. Do the authors mean https://huggingface.co/gpt2? If so, this is the smallest version of GPT-2, with 124M parameters). While providing a good initial indication, I am not sure that the conclusions for GPT-2 will hold for larger models (because of "emergent abilities" and generally very different behavior of GPT-2-sized models compared to larger models). Even if the authors have access only to a single GPU, I think that there are larger models that can be fine-tuned on a single GPU. Thank you for pointing this out. We do indeed use the smallest GPT-2 model (124M parameters), and will update our manuscript to reflect this. While it's true that larger models can behave differently in general, we observe that the GPT-2 result trends are consistent with the few-shot prompting experiments on the larger models. The main objective of this work is to experimentally validate the difference between in-context thoughts (Self-Notes) vs post-context thoughts (CoT/Scratchpad) on a fixed model, which we do across a variety of models of different sizes. --- > The few-shot prompting experiments were performed on GPT-J, which is a bit disappointing - these days, it is relatively easy for the broad community to experiment with large models via prompting API, and new few-shot prompting approaches such as Self-Notes can be easily evaluated on much stronger models such as GPT-3/4 / PaLM / Claude / Codex. 
We did evaluate using a GPT-3 model (text-davinci-001). Note that for cost as well as model openness reasons, we didn't evaluate our models on a whole array of models accessible via a paid API. --- > Wouldn't the approach be stronger if the question was provided first, before the context input? Currently, "The model can use notes as a form of working memory by writing information that might be useful in the future" (Lines 76-77). But how would the model know what information might be useful in the future, if it hadn't processed the question yet? Thank you for raising this great question. We agree that a question-driven input, i.e., the question placed first, should filter out Self-Notes that are irrelevant to the entity(ies) of interest in the question. However, we're imagining a curious reader who takes these general notes and can ultimately answer a wider range of questions. This can be beneficial for several reasons. First, in many scenarios the model has already precomputed the answers to a variety of questions, and doesn't have to "re-reason" each time a new question is asked. Second, for certain tasks, such as program evaluation, it can be hard to predict what entities (variables) can influence the value of the entity of interest, since that would require predicting the evolution of the program. Finally, by providing the question after the context, we can make a fair comparison to previous CoT/Scratchpad methods, which do the same. --- > I would rename the "unsupervised Self-Notes" to "Unlabeled Self-Notes" or something in this spirit, because the model is supervised, but as far as I understand, on augmentation of the dataset, rather than on a manually-labeled version of the dataset. Thank you for bringing this up. We see from your description that the term Unsupervised Self-Notes could be interpreted the wrong way. The model is trained with a supervised loss on the answer, but the self-notes are unsupervised. 
We will update this in our final manuscript. --- > Imagine that you had the ability to run GPT-3/4 locally. Wouldn't generating Self-Notes be much slower than standard prompting? In standard prompting, the model reads the entire prompt in a single forward pass, where all prompt tokens can be processed in parallel. In Self-Notes, don't we have to feed the prompt tokens one-by-one, to check whether the model has predicted the next token to be "start-of-note"? (And in that case, stop feeding the original prompt and feed the generated note instead?) Self-Notes is expected to run slower than standard prompting since it generates additional tokens, and, as suggested, the input can't be processed in parallel. Note that we only check for a Self-Note generation at the end of statements/sentences, which makes it more efficient than performing this check after every token. Comparison with Scratchpad/CoT is task-dependent, though. In our experiments, we found Self-Notes to be faster than scratchpad for certain tasks (e.g. Algorithmic, where scratchpad has to copy the entire program), and slower for others (e.g. Toy Story). Speedups can also be achieved by caching hidden states of previous forward passes. --- Rebuttal Comment 1.1: Title: Discussion period Comment: Dear authors, Thank you for your response. I appreciate your efforts and I like the elegant and simple idea proposed in this paper, but I am left with the conclusion that Self-Notes does not work with stronger models than `GPT2-small` and `GPT-3`. >"we observe that the GPT-2 result trends are consistent with the few-shot prompting experiments on the larger models" I do not agree with this claim, since no experiments were presented to support it, and there is much evidence that it is quite the opposite (e.g. ["Emergent Abilities", Wei et al., 2022](https://arxiv.org/pdf/2206.07682.pdf)). >We did evaluate using a GPT-3 model (text-davinci-001). 
Note that for cost as well as model openness reasons, we didn't evaluate our models on a whole array of models accessible via a paid API. While I am a strong supporter of openness as well, I cannot accept these arguments: 1. According to OpenAI [[pricing #1]](https://openai.com/pricing#language-models) [[pricing #2]](https://platform.openai.com/docs/deprecations/), **`text-davinci-001` is not cheaper than `text-davinci-002`, `text-davinci-003`, `gpt-3.5-turbo`**. The authors could easily run experiments with these models for the same cost, with a change of a single string. 2. `code-davinci-002` is **free** and still accessible. 3. There are various open-source models, across a variety of sizes, that the authors could have experimented with, such as LLaMA, LLaMA-2, Vicuna, Falcon, etc. 4. Models such as `gpt2-large` / `gpt2-xl` can be fine-tuned on a single GPU, and the authors persist in using only `gpt2-small` with no convincing justification. The authors argue that "there is nothing inherently limiting about our proposed method as we scale the size of the model up", which is true except for the fact that they did not demonstrate it, and the returns may diminish with larger models. The authors' refusal to consider experiments with newer and larger models leads me to no conclusion other than that Self-Notes does not work with stronger models than `GPT2-small` and `GPT-3`. I hope that the authors will convince me otherwise by the end of this discussion period, or acknowledge that Self-Notes's benefits are limited to smaller models. In the meantime, I am reducing my score. --- Reply to Comment 1.1.1: Comment: Thank you for your response. We have addressed your concern in a comment to all reviewers: https://openreview.net/forum?id=ZFwNdsDCRL&noteId=o2bvfXVmWz
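The interleaved decoding loop discussed in this thread (feeding the context incrementally and giving the model a chance to insert a note only at sentence boundaries) can be sketched in a few lines. This is a toy illustration, not the paper's implementation: the `<note>`/`</note>` markers and the stub `toy_model` are assumptions standing in for a real language model.

```python
import re

NOTE_START, NOTE_END = "<note>", "</note>"

def toy_model(prefix):
    """Stand-in for an LM step: decides whether to open a Self-Note at
    the current sentence boundary and, if so, what the note says.
    (Purely illustrative; a real system would sample from an LM.)"""
    if prefix.rstrip().endswith("Bob is in the kitchen."):
        return f"{NOTE_START} Bob and his keys are together. {NOTE_END}"
    return None  # no note at this boundary

def generate_with_self_notes(context, model):
    """Feed the context sentence by sentence; after each sentence, give
    the model a chance to interleave a Self-Note before the remaining
    context is consumed. Checking only at sentence boundaries avoids a
    per-token check, as described in the rebuttal."""
    out = []
    for sentence in re.split(r"(?<=\.)\s+", context.strip()):
        out.append(sentence)
        note = model(" ".join(out))
        if note is not None:
            out.append(note)
    return " ".join(out)

story = "Bob has the keys. Bob is in the kitchen. Where are the keys?"
print(generate_with_self_notes(story, toy_model))
```

In contrast to CoT/Scratchpad, where all generated reasoning comes after the full context, the note here lands between the second and third sentences, before the question is even read.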
Summary: The authors introduce a method called "Self-Notes" that allows the model to think and write down its thoughts during the reasoning process. Unlike other approaches, this method enables the model to deviate from the input context, integrate previous reasoning steps, and enhance its memory with useful information. The experiments show that the Self-Notes method outperforms chain-of-thought and scratchpad methods by interleaving the input text with the model's notes. Strengths: 1. The authors propose a method called "Self-Notes," which involves jotting down thoughts during the reasoning process. 2. The experiments compare their method with different algorithms and demonstrate the advantages of their approach in supervised, semi-supervised, and unsupervised learning settings. Weaknesses: 1. The "Supervised Self-Notes" experiments were conducted using GPT-2. However, it is recommended to use larger open-source models like LLaMA for these experiments. 2. It is recommended to conduct "Semi-supervised Self-Notes" experiments for all tasks shown in the paper. 3. It is preferable to use the same evaluation set for all three types of experiments. For instance, Tables 4 and 5 utilize math word problems for evaluation, which are not included in Tables 2 and 3. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. The performance of GPT-3 is significantly low in table 5. What could be the reason behind this? Have you considered using GPT-3.5 for the experiments instead? 2. Is it possible to generalize the "Self-Notes" data from one task to different tasks? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: The authors have discussed the limitation and broad impact in the paper. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed review. We have addressed your comments regarding LLaMa and GPT-3.5 in the comment to all reviewers. ---- > It is recommended to conduct "Semi-supervised Self-Notes" experiments for all tasks shown in the paper. Across the two tasks we conducted semi-supervised experiments on (Toy-Story and Algorithmic), we observed a consistent trend of Self-Notes outperforming the vanilla model at less than 25% supervision. The objective of these experiments was to show that it's possible to achieve good performance even when we don't have every sample in the training set labeled with Self-Notes, which can be expensive. We expect this trend to hold across other tasks. ---- > It is preferable to use the same evaluation set for all three types of experiments. For instance, Tables 4 and 5 utilize math word problems for evaluation, which are not included in Tables 2 and 3. We did not run the fine-tuning settings (supervised, semi-supervised, unsupervised) on the math word problem datasets because they are small datasets (MultiArith only has 600 samples), and thus are better suited to few-shot prompting, where we can evaluate on all samples. ---- > The performance of GPT-3 is significantly low in table 5. What could be the reason behind this? Have you considered using GPT-3.5 for the experiments instead? Thank you for raising this question. Greedy decoding, 5-shot prompting instead of the 8-shot prompting used by Wei et al. 2022, and use of text-davinci-001 are the main reasons for the reported performance. Our emphasis was on comparing Self-Notes with CoT, rather than on their absolute performance. See also the comment to all reviewers for more detail on this. ---- > Is it possible to generalize the "Self-Notes" data from one task to different tasks? This is a great question. 
We expect that if the model is trained on a diversity of tasks with Self-Notes supervision, the model can generalize to novel tasks, as has been the case with instruction-tuned models that generalize to novel instructions when trained with a rich diversity of instruction-based tasks. That would be very exciting to see in future work. --- Rebuttal Comment 1.1: Title: Discussion period Comment: Dear Authors, Thank you for your response. My primary concern regarding the first weakness mentioned is unresolved in the rebuttal. If you do not want to use GPT-3.5, I'd recommend considering test your method with LLaMa and LLaMA-2. To this end, I've adjusted my score from **5** to **4**. If this concern can be addressed during the discussion period, I am willing to retain my initial score of **5**. --- Reply to Comment 1.1.1: Comment: Thank you for your response. We have addressed your concern in a comment to all reviewers: https://openreview.net/forum?id=ZFwNdsDCRL&noteId=o2bvfXVmWz
Summary: One very general and beneficial method for improving the outputs of LMs is called *chain-of-thought* (CoT) reasoning, by which an LM is trained or prompted to first output its step-by-step reasoning before outputting the answer to a problem. This paper proposes a major extension of CoT wherein the LM is trained or prompted to insert its reasoning (as *Self-Notes*) at any relevant points in the token stream, rather than waiting until the end of the prompt, which can be quite long when the problem description is long and complex and contains example solutions. Self-Notes also serve as a form of in-context memory, summarizing key inferences made by the model along the way. The new method is evaluated against LMs with and without CoT, on a wide range of datasets, and employing four different learning paradigms: Supervised, semi-supervised, unsupervised, and few-shot prompted. The experiments clearly demonstrate significant performance gains due to Self-Notes in most settings, particularly in generalizing to cases not seen in training. Strengths: This work is sound and compelling. The presentation is especially clear and easy to follow. The results are particularly interesting given the current importance of LLMs, and the field’s focus on finding ways to improve LLM reasoning even further. Weaknesses: Most of the experiments were performed using GPT-2, which seriously lags behind the current generation of LLMs in terms of output quality. A few experiments in this work involved GPT-3, but the field has changed dramatically since the introduction of GPT-3.5 and especially GPT-4, which is particularly good at leveraging few-shot prompts. This work’s contribution could be much greater if the experiments included GPT-4. Section 4.4 mentions that GPT-3 was called through the public API. Given this constrained access, Self-Notes were intentionally limited to appear only at end-of-sentence positions. The same could be done when using GPT-4 through the public API. 
But the small margin of improvement over CoT shown in Table 5 (using few-shot prompting) raises the question of how effective Self-Notes would be when limited to end-of-sentence positions using GPT-4. All of the examples from the datasets in this work share a certain underlying structure: step-by-step revelation of partial information, in a way that benefits from progressive inference and ongoing memory to track the evolving state. Such tasks seem ideally suited to the insertion of Self-Notes into the token stream. By contrast, the benefit of Self-Notes would be more surprising (and valuable) in the broader set of reasoning tasks that don’t share this special structure. For instance, consider the problem from section 8.2 in [2], where the LLM is tasked to modify exactly one number in an algebraic equation in order to make the right hand side equal a particular value. A chain of reasoning is required here, but the facts in the problem statement are not arranged in an obvious way to provide convenient insertion points for Self-Notes. The significance of Self-Notes would be greater if their benefit were shown to extend to other classes of important problems. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: What are the numeric results shown in Table 4 & Table 5? Accuracies? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: No concerns. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your comprehensive and valuable review. We appreciate the nice summary and the comment highlighting the importance of improving LM reasoning with Self-Notes. We have addressed your comments regarding GPT-3.5 and GPT-4 in the response to all reviewers. ---- >All of the examples from the datasets in this work share a certain underlying structure: step-by-step revelation of partial information, in a way that benefits from progressive inference and ongoing memory to track the evolving state. Such tasks seem ideally suited to the insertion of Self-Notes into the token stream. By contrast, the benefit of Self-Notes would be more surprising (and valuable) in the broader set of reasoning tasks that don’t share this special structure. For instance, consider the problem from section 8.2 in [2], where the LLM is tasked to modify exactly one number in an algebraic equation in order to make the right hand side equal a particular value. A chain of reasoning is required here, but the facts in the problem statement are not arranged in an obvious way to provide convenient insertion points for Self-Notes. The significance of Self-Notes would be greater if their benefit were shown to extend to other classes of important problems. We appreciate the careful analysis of the datasets we used. Self-Notes is a *general* method that allows a language model to write itself notes at any time, as described in Figure 1 and nicely summarized in your review. It is easily generalizable to other tasks not described in this paper. In particular, we consider Chain-of-Thought to be a special case of Self-Notes, where the note can only come at the end of the context. For the algebraic problem described, the Self-Note could be written after the model has seen the full context, which is similar to CoT.
While the goal of this work was to compare to the types of tasks that CoT/Scratchpad are typically used for, implementing Self-Notes on other tasks such as the algebraic problem is an exciting direction and a great recommendation. ---- > What are the numeric results shown in Table 4 & Table 5? Accuracies? Yes, these are accuracies (%), thank you for bringing this to our attention. We have updated the manuscript to reflect this. --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: Thank you for the clarifications!
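The contrast drawn in this rebuttal thread (in-context Self-Notes vs. post-context CoT) can be sketched as an inference loop. This is an illustrative reconstruction, not the authors' implementation: `ToyModel`, `wants_note`, `generate_note`, and `generate_answer` are hypothetical stand-ins for a language model with a special note token.

```python
class ToyModel:
    """Minimal stand-in for a language model, only to make the sketch runnable."""

    def wants_note(self, stream):
        # Toy trigger: take a note whenever the last sentence mentions a move.
        return " goes to " in stream[-1]

    def generate_note(self, stream):
        who, _, where = stream[-1].partition(" goes to ")
        return f"<note> {who} is in {where.rstrip('.')}. </note>"

    def generate_answer(self, stream):
        # Toy QA: answer "Where is X?" from the most recent note about X.
        who = stream[-1].removeprefix("Where is ").rstrip("?")
        for token in reversed(stream[:-1]):
            if token.startswith("<note>") and f" {who} " in token:
                return token.split(" is in ")[1].split(".")[0]
        return "unknown"


def answer_with_self_notes(model, context_sentences, question):
    """Illustrative Self-Notes inference loop.

    The model reads the context incrementally and may interleave a note into
    its own input stream at any point, so later reasoning can attend to
    earlier inferences. Chain-of-Thought is then the special case where a
    note may appear only once, after the full context has been read.
    """
    stream = []
    for sentence in context_sentences:
        stream.append(sentence)
        if model.wants_note(stream):  # e.g. high probability of a <note> token
            stream.append(model.generate_note(stream))
    return model.generate_answer(stream + [question])
```

With a real LM, the `wants_note` decision is made by the model itself assigning probability to a note-start token; the toy rule above is purely for demonstration.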
Summary: This paper introduces a variation to the chain of thought and scratchpad techniques that can easily be applied to pre-trained transformers. While reading a passage, the model can insert "self notes" at any point in the input sequence. These self-notes allow the model to perform chain-of-thought reasoning, by annotating the input sequence with additional information about characters, variables, etc. Self-notes differ from standard COT because the model can create multiple notes, and place those notes at appropriate locations in the input sequence. The authors compare several different ways of training a model to insert notes: supervised, semi-supervised, unsupervised, and prompting. The "unsupervised" method is the most interesting, because it requires the least amount of human-generated annotations, although it does require that the model be fine-tuned on a QA dataset. IMO, it's not quite "unsupervised", simply a clever way of using existing QA datasets to train a model to "ask itself questions". All three methods produce improvements over SOTA on a variety of tasks. Strengths: This paper was very clearly written. I especially applaud the authors for concisely illustrating their main contributions in a figure that appears right before the opening introductory paragraph. The self-note mechanism is a fairly simple extension to prior work on scratchpad and chain-of-thought reasoning, but I fully expect it to be high-impact. The technique is easy to understand, easy to implement, works with pre-trained models, and results in large and obvious improvements on reasoning tasks. The experiments are well-designed; the authors compare multiple training methods on multiple tasks, and the chosen tasks are appropriate. Weaknesses: The main limitation of this work is (1) the requirement for human annotations in the supervised and semi-supervised cases, or (2) the requirement for a Q/A dataset in the unsupervised case. 
The models are relatively small (GPT-2), and the tasks are fairly simple and partially synthetic. Additional work may be needed to scale this technique up, especially since existing QA datasets are quite small compared to the training corpus for modern LLMs. However, I feel that some combination of unsupervised training and prompting could probably overcome the scaling challenges. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Have you considered how this technique could be scaled to LLM size? What potential problems do you expect to encounter? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: The authors do not discuss societal impacts. They do briefly mention scaling as an important area for future work, but do not go into any detail about the limitations of their training techniques. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the careful review and helpful comments. We appreciate the attention to detail, the concise description of contributions, and beneficial suggestions. Thank you also for highlighting the potential high impact of our work on future research. We have addressed your comments regarding the GPT2 experiments and scaling to LLM size in the response to all reviewers. ---- >The main limitation of this work is (1) the requirement for human annotations in the supervised and semi-supervised cases, or (2) the requirement for a Q/A dataset in the unsupervised case. Thank you for pointing out this important detail. We note that scratchpad and few-shot chain-of-thought also require human annotations and Q/A datasets. So while it is a drawback compared to vanilla training, it requires the same amount of annotations as Chain-of-Thought/Scratchpad. The goal of this work is to experimentally validate the difference between in-context thoughts (Self-Notes) vs purely post-context thoughts (CoT/Scratchpad). We have added this discussion to the manuscript. ---- > The tasks are fairly simple, and partially synthetic. We aimed for a wide variety of tasks in several domains, demonstrating the advantage of our method in both synthetic and real-world tasks. We followed Fan et al. 2020 and Anil et al. 2022 for synthetic testbed tasks, and followed Wei et al. 2022 and Toshniwal et al. 2021 for real-world tasks. We specifically chose these datasets as we believe they most appropriately evaluate/demonstrate the ability of a model to do multi-step reasoning and state-tracking. ---- > Additional work may be needed to scale this technique up, especially since existing QA datasets are quite small compared to the training corpus for modern LLMs. However, I feel that some combination of unsupervised training and prompting could probably overcome the scaling challenges. Thank you for raising this important point.
There are several recent works showing that small-scale fine-tuning datasets (e.g. “LIMA”, “AlpaGasus”) work well with large pre-trained models. So we don’t expect there to be an issue fine-tuning large models on a small set of labeled Self-Note samples. We also show that we can get good performance with Self-Notes using few-shot prompting, where we only require a few labeled instances. We do agree that some combination of unsupervised training and prompting could lead to even greater gains; this is a great suggestion and an interesting direction to pursue. ---- > The authors do not discuss societal impacts. They do briefly mention scaling as an important area for future work, but do not go into any detail about the limitations of their training techniques. Thank you for bringing this important missing discussion to our attention. We will add the following. We don’t expect any negative societal impacts as a direct result of our method. We acknowledge that there are already significant potential impacts with language models in general, but we don’t expect our work to mitigate any such existing issues, other than perhaps adding an extra degree of interpretability by examining the self-notes themselves. As with all tokens generated by generative language models, there is a risk of hallucinations in the Self-Notes. The practical limitations of Self-Notes are similar to those of Chain-of-Thought/Scratchpad. Namely, most applications require labeled Self-Note annotations in order for the model to learn how to generate them. Lastly, generating Self-Notes is more costly than vanilla Transformer inference, but can be more efficient than CoT/Scratchpad depending on the task. --- Rebuttal Comment 1.1: Comment: Thank you for these responses.
Rebuttal 1: Rebuttal: We would like to thank all reviewers for the invaluable feedback. We greatly appreciate the encouraging and helpful comments, which have made our paper stronger. All reviewers pointed out the main contributions of our work and noted the potential impact of our work on future research. We respond to individual comments below each review, and make a general response to comments regarding our experimental setup here. We show extensive and diverse experiments across 4 training and inference settings (supervised, semi-supervised, unsupervised, few-shot prompted), on 7 total datasets, using various models ranging from 124M to 175B parameters in size. Importantly, our proposed method applies generally across all model sizes, datasets, and training/inference settings that we have tested. The fine-tuning experiments were performed with smaller models (GPT-2) since we use a large number of datasets and training settings, which takes a substantial amount of resources. The few-shot experiments were performed with larger models (GPTJ and GPT3 text-davinci-001), where a limiting factor was the financial constraints of a paid API. There is an extensive body of work showing that techniques to improve reasoning and state tracking improve as models are scaled up (Nye et al. 2021, Wei et al. 2022, Anil et al., 2022, Wang et al. 2023, Yao et al. 2023, Touvron et al. 2023). These works include both fine-tuning and few-shot prompting. Aside from being more expensive than vanilla training/inference, there is nothing inherently limiting about our proposed method as we scale the size of the model up. We note that the GPT-4 API general availability wasn’t until July 6, 2023, so it was not considered for this submission. GPT-4 was also trained on GSM8K samples which include CoT explanations (OpenAI, 2023), and there is anecdotal evidence that this is the same for GPT-3.5 (Fu et al., 2023).
We used a diverse set of models for our experiments to demonstrate the usefulness of our method. While there are more proprietary models (GPT3+, Claude, etc), they are not open, so we do not know the training data, which may add unknown bias to results (e.g., techniques in Lightman et al., 2023), and they are expensive, so we could only run a limited number of experiments given our funding. Thus, in general we believe the community should push for the use of open models for reproducible science rather than necessitating the use of such models. While models will always be improving and there will be new models to test a new method on, we ran extensive experiments and compared results for self-notes vs scratchpad/CoT for a fixed model. It seems clear that as models get stronger, they will be able to write better Self-Notes. We are hence excited about the applications of Self-Notes that will be enabled in the future.
NeurIPS_2023_submissions_huggingface
2023
Human-in-the-Loop Optimization for Deep Stimulus Encoding in Visual Prostheses
Accept (poster)
Summary: The authors present a method to improve perceptions for patients using visual prostheses using a combination of a deep learning encoding model and a patient-specific tuned set of parameters for stimulation learned via preferential Bayesian optimization. They use simulated data and patient choices to demonstrate the method’s performance across a range of possible patient types and noise considerations. ETA: After review and rebuttal, raising my score from a 5 to a 6. Strengths: Originality: This work presents a nice combination of deep learning and Bayesian optimization. Each component of their approach (phosphene model, encoding model, and parameter optimization) has in general been done before, but here the authors modify each element and combine them to overall improve performance. The phosphene model is described as a new model with performance exceeding current SOTA. This was then combined with a deep stimulus encoder (DSE) to yield performance comparable to other methods. Finally, their human-in-the-loop (HILO) method to tune parameters using PBO was used to iteratively improve the DSE per patient. Quality: In general the work is well-sourced and supported from many previous works. The benchmarking and evaluation of performance was good, and the authors compared to similar methods. The code in particular was very well written and documented. Clarity: Overall well-written, well-sourced, nicely explained work. There were some points of confusion or ambiguity mentioned below. Significance: This kind of patient-specific tuning of stimulation for visual prostheses is certainly needed, and the authors do a good job of demonstrating the significance of their contribution. The authors also suggest it could be used in other areas beyond visual stimulation, but with the results being tied closely to the patient-specific parameters in Table 1 it is unclear how broadly this method might be applied.
Weaknesses: The primary weakness is the reliance on simulated data, and it is not clear that the simulated patient choices or the patient parameters used to generate the data were sufficiently grounded in experimentally collected datasets. There is some discussion of how the ranges for the parameters were chosen in their simulated data, but nothing about the expected variability and how these parameters were determined in the first place (is it considered ground truth? were patient preferences included?). If the method is sufficiently generalizable to many populations of patients, as expected, then why not simulate patients outside of those limited ranges in Table 1? Lines 263-265 appear to suggest that their ranges of parameters are unlikely to be the same as in the real world. A similar concern is for simulating patient preference, but here the authors' simulations (up to 33% incorrect decisions from the patient) appear to be a robust demonstration. One other weakness not discussed is the feasibility of their approach for actual human patients. Even if it only takes 3 seconds to update the GP and generate new stimuli, the patient must sit through and compare difficult-to-discern visual percepts for 100 duels. Is it likely the patient would persist? Does their judgement change over time? If this could be changed every so often, does the system need re-tuning for all ~100 duels at once? Additionally, humans see visual scenes with certain image statistics -- so MNIST may not be a reasonable or sufficient dataset for testing human perception on. Could this approach be extended to natural images, including color? Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: i. Line 43 says <30 is too few parameters, but then the authors used only 13 dimensions of parameters? ii. Lines 129-130 appear to have a sentence cut off abruptly. iii. Line 127: BO can’t optimize more than a small number of parameters -- clarify please?
BO gets difficult in high dimensions, but this is true of many methods. How specifically does the DSE reduce this? iv. Clarify the issue in lines 154-158. The logic is not clear what the exact problem is and how it is solved here. v. If you didn’t restrict the range of parameters in Table 1 as heavily, would it still find the optimal values? E.g. if you increased the range by a factor of 2. vi. Line 232 first mention of Argus II, needs explanation and citation. vii. What does it mean to outperform the true DSE? Line 289 viii. Line 296 how does the approach fail? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 2 fair Limitations: The authors addressed some limitations in the discussion, noting some drawbacks of predicting the behavior of deep learning algorithms but that hardware-engineered safety constraints would provide patient protection. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their positive review and insightful analysis. ## Weaknesses > Simulated Data and Patient Ranges We agree that proper selection of patient parameters is crucial for realism. The process of choosing these ranges of patient parameters was complicated by the variety of data sources [8,10,11,18,19,37,62], each using very different experimental setups. We used fits from the mentioned sources, based on psychophysical tasks such as drawings and brightness ratings, to obtain a distribution of possible values in patients. Our ranges were chosen empirically to encompass all observed patients with a wide additional margin (~2x the observed distribution), adjusted to be within allowed boundaries (e.g. $\lambda$ cannot be larger than 1). Therefore, the ranges in Table 1 are actually quite wide, more than accounting for all reported data, and only appear small due to the units. For example, $a_2$ goes from 0.005 to 0.025, meaning that a frequency increase from 20 to 120 Hz would lead to a percept being between 1.5x and 3.5x as bright, a larger range than observed in patients [19]. Unfortunately, we do not have time to retrain our DSE and run experiments again with increased ranges. However, HILO still led to large improvements for out-of-distribution patients (even without retraining, Fig 4), so we do not expect patients outside our ranges to prevent HILO from performing well. > 263-265 suggest that their ranges of parameters are unlikely to be the same as in the real world. We only included this to acknowledge that any estimate is likely slightly different from reality. The only result that depends heavily on these ranges is the mean DSE baseline, which uses the mean value of the reported ranges as the guess for $\phi$. This is of course expected to work better on our simulated patients (from the exact same range) than on real patients (likely a similar, yet slightly different range).
> Practical feasibility, changing perception, and natural images: The reviewer raises interesting questions. Based on the difficulty of standard clinical tasks [63] often involving hundreds of trials over multiple hours, we do not expect HILO to be too difficult. We also note that our method showed clear performance improvements after only 20 duels. Prior work suggests that phosphenes remain relatively stable over time [46]. If perception changes, HILO is short enough (~17 minutes if all 100 duels are needed, 10 seconds per duel) that it could easily be performed again. The reviewer is certainly correct that perceiving MNIST digits is not the eventual goal of visual prostheses. However, seeing even simple characters without prolonged head scanning would be a significant improvement in perception for existing devices [63]. DSEs have already shown improvements for natural images [14,12], which could enable a similar approach to ours with complex targets. Unfortunately, detailed color perception is not possible with current visual prostheses [47]. ## Questions > Line 43 says <30 is too few parameters... This is saying that PBO conventionally requires fewer than 30 parameters. > Line 127: BO can’t optimize more than a small number of parameters -- clarify please? … How specifically does the DSE reduce this? Although many optimization methods become difficult in higher dimensions, PBO quickly becomes inefficient with more than ~30 parameters [17]. In contrast, DNNs excel at high-dimensional optimization. Our stimuli are high-dimensional: 225 electrodes, each with 3 stimulation parameters. Further, we want to optimize stimuli not just for one target image, but for any target image in the training set. This is a regime where direct BO is clearly not possible. Our DSE is first trained to output optimal stimuli for any input patient (specified by $\phi$). Then, BO is used to infer $\phi$ for a new patient. Once known, the DSE can produce optimal stimuli for that patient.
$\phi$ is low dimensional (13 parameters), and thus is a suitable candidate for BO, whereas the original stimulation pattern (675 parameters) is not. Thus, the DSE takes care of the high-dimensional optimization, while BO incorporates human feedback. > Clarify the issue in lines 154-158... We apologize for the confusion. The problem with existing phosphene models (besides inferior performance) is that they are too slow for gradient descent and too large for GPUs. We also observed that training over large ranges of patients with previous models was unstable and did not converge. Lines 154-158 are some potential reasons for this, but ultimately it is an open area of research (effects of nonstandard NN functions on gradient loss landscape). Since we are uncertain in this regard, we will remove these lines. > Line 232 first mention of Argus II... We apologize for the omission, and have added a citation and explanation. Argus II is the most successful commercial visual prosthesis, with ~300 users [3], and was used to collect the data used for Table 1. > What does it mean to outperform the true DSE? For patients whose perception is precisely predicted by our phosphene model, the DSE with true $\phi$ is likely the ideal encoder for HILO. In reality no phosphene model is perfect, so it’s important for encoders to work when the assumed model is misspecified. For these patients, the true DSE is no longer guaranteed to be optimal, because the DSE was trained on non-misspecified patients. Our revised Fig 4 better illustrates that the True DSE performs worse on misspecified patients, while HILO performance is not as affected. HILO allows for users to guide optimization towards encoders that perform well for their misspecified forward models, a strong indicator that HILO is robust and has the potential to work on real patients. > How does the approach fail? 
For 1 of the 100 patients, HILO led to a final performance that was between the mean and random DSE baselines, but still an improvement over the naive encoder used by Argus II. --- Rebuttal Comment 1.1: Comment: Thank you for the additional detailed explanations. If the authors include their revised figures and additional details in the text, I think the paper will be greatly strengthened. I have read the other reviews and rebuttals, in particular the concerns raised by reviewer YFQD. Including the authors' responses to clarify the application-driven nature of their work (vs new theory in ML) and to clarify some technical details on retinal prostheses would make the paper much better, and I would raise my score to a 6. --- Reply to Comment 1.1.1: Comment: Thank you for the response. We are happy to include details requested by the reviewers in the revised/final version (OpenReview doesn’t allow a revised version now, only the final version if accepted). This will include the updated figures, details on retinal prostheses and clarification of the application emphasis of the paper.
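The two-stage structure described in these responses (a frozen deep stimulus encoder, with preference duels used only to infer the 13 patient parameters $\phi$) can be caricatured in a few lines. Note this sketch substitutes a naive duel-driven random hill-climb for the GP-based preferential Bayesian optimization actually used in the paper, and `dse_loss` is a hypothetical stand-in for the frozen DSE + phosphene model + simulated patient.

```python
import numpy as np

def hilo_duel_loop(dse_loss, phi_dim=13, n_duels=100, step=0.1, seed=0):
    """Caricature of HILO's outer loop (not the paper's PBO).

    `dse_loss(phi)` plays the role of the frozen deep stimulus encoder plus
    phosphene model plus patient: lower loss means the simulated patient
    prefers percepts produced under patient-parameter guess `phi`. Each
    "duel" compares the incumbent guess against a perturbed challenger and
    keeps whichever the patient prefers. The encoder weights never change.
    """
    rng = np.random.default_rng(seed)
    incumbent = rng.uniform(-1.0, 1.0, phi_dim)   # initial guess for phi
    for _ in range(n_duels):
        challenger = incumbent + step * rng.normal(size=phi_dim)
        # Simulated patient "duel": which candidate yields better percepts?
        if dse_loss(challenger) < dse_loss(incumbent):
            incumbent = challenger                # patient picked the challenger
    return incumbent
```

In the actual method, the pairwise choice comes from a human (or simulated) observer and the next query is selected by a Gaussian-process acquisition function rather than random perturbation; the key point illustrated here is that only the low-dimensional $\phi$ moves during HILO.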
Summary: In this work, the authors proposed a pipeline for optimizing the deep neural network-based encoder that generates visual stimuli for neuroprostheses. The pipeline considers human-in-the-loop optimization. The authors use preferential Bayesian optimization techniques to reduce the number of queries to the human making the decision. The major contributions claimed by the authors are as follows: 1. Introduced a forward model for retinal implants suitable for fast simulation experiments. 2. Proposed a personalized human-in-the-loop (HILO) patient-specific stimulus parameter optimization strategy using preferential Bayesian optimization. 3. Demonstrated HILO can quickly learn a personalized stimulus encoder, and demonstrated the robustness of HILO on simulated patients. Strengths: Originality and significance: As far as I know, this is the first work to consider the patient-specific stimulation parameter calibration problem for visual neuroprostheses that does not assume direct access to the output of the forward model. Instead, the authors introduce the technique of human-in-the-loop training with preferential Bayesian optimization in the pipeline. This setting is considerably more realistic than previous works. Quality and clarity: The writing in general is very structured and easy to follow. Weaknesses: 1. It is not made super clear whether and how the parameters w of the encoder are being updated in the manuscript. 2. The authors could have demonstrated the method on a small number of sighted humans to show the effectiveness of this optimization method. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. The simulated patient has access to t, which is the target percept or ground truth. Is it realistic to assume the blind person could have access to that? 2. In line 219 it’s written that the metric in equation (7) is also used to train the deep stimulus encoder (DSE).
So the encoder’s parameters are getting updated in a rather conventional way as in the visual prosthesis literature. How to distinguish the contribution between the improvement of the encoder and the selection of patient-specific parameters? Could it be that the update of the encoder contribute the most? Ablation studies might be needed. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Other than the concerns listed in the weakness and question sections, the limitations mentioned in the work are adequate. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are grateful to the reviewer for their encouraging comments and thoughtful questions. ## Weaknesses > 1. It is not made super clear whether and how the parameters w of the encoder is being updated in the manuscript. We apologize for any confusion, and will update the paper with improved notations and clarity in this regard. During DSE training, the weights $w$ of the encoder are updated using standard gradient descent with the Adam optimizer. The loss function minimized during training is the reconstruction error between target images and the predicted percepts resulting from stimulation (Eq 7). Thus, the encoder learns to output optimal stimuli for given target images for the input patient specified by $\phi$. This all occurs before HILO begins. During HILO, the weights of the encoder are held constant, and are not updated. The only parameters optimized during HILO are the patient-specific parameters $\phi$, using the PBO strategy described in the paper. > 2. The authors could have demonstrated a small amount of humans with sight to show the effectiveness of this optimization method. We agree that this would have been an ideal way to verify whether our simulated subjects make decisions in the same way humans do. Unfortunately a human subjects experiment was outside the time and scope of this project. However, in [30], authors also conducted PBO on sighted humans in a similar, albeit much less realistic setting. Despite using a linear decoder and unrealistic phosphene model (as opposed to our deep neural network encoder and realistic phosphene model), their results still showed that PBO could be used with sighted humans viewing simulated prosthetic vision. ## Questions > 1. The simulated patient has access to t, which is the target percept or ground truth. Is it realistic to assume the blind person could have access to that? The reviewer raises an excellent point. 
With human patients, the subject indeed could not directly observe the target t. However, the user could be informed indirectly of the target, either verbally (e.g. asking ‘which looks more like the number 5’) and/or with tactile targets the subject could feel. In [30], authors demonstrated this with human subjects, who were only verbally told the target image. We do note that communicating targets with verbal or tactile representations would be limited to targets simple enough to communicate to the patient during training, such as letters, numbers, or simple objects. However, perception of simple stimuli would still be a huge improvement for current devices, which barely support letter recognition without prolonged head scanning [63]. Further, prior work ([12, 14, 36]) demonstrated that DSEs also improve performance for natural images. We leave it to future research to determine how encoder improvements from HILO with simple stimuli transfer to more complex visual scenes. > 2. In line 219 it’s written that the metric in equation (7) is also used to train the deep stimulus encoder (DSE). So the encoder’s parameters are getting updated in a rather conventional way as in the visual prosthesis literature. How to distinguish the contribution between the improvement of the encoder and the selection of patient-specific parameters? Could it be that the update of the encoder contribute the most? Ablation studies might be needed. The reviewer is correct that the metric in equation (7) is used to train the DSE prior to HILO. During HILO, however, the DSE parameters are held constant, and only the patient-specific parameters $\phi$ are modified. Therefore, all of the performance improvements shown in the figures are due to the selection of patient-specific parameters.
The other DSE baselines (marked as mean and random in the submitted Figures 3 and 4) are meant to serve as a sort of ablation, illustrating what performance would be like if only the DSE were used, without updating the patient-specific parameters. --- Rebuttal Comment 1.1: Comment: I thank the authors for the rebuttal. It addresses my questions 1 and 2. Could you elaborate more on the differences from [30] and the improvements over it? --- Reply to Comment 1.1.1: Comment: In [30], the authors used PBO to update a linear stimulus encoder based on feedback from sighted humans viewing simulated prosthetic vision. Each participant was shown the target image, encoded with 2 different linear encoders and fed through a linear phosphene model, and chose which rendering they preferred. They used a linear approximation to [8] as their forward model (which does not match data from patients; see revised Table 2), and a linear matrix inverse of this model as their encoder. More generally, their approach is limited to linear forward models and stimulus encoders, and is fundamentally incompatible with more complex systems. Meanwhile, there is an increasingly large body of literature demonstrating that visual perception is a highly nonlinear function of electrical stimulation ([8, 10, 11, 18, 19, 37, 62], among others). Stimulation strategies that rely on assumptions of linearity have been ineffective at restoring high-resolution prosthetic vision in human subjects [3], generally reaching visual acuities much worse than theoretical limits based on device hardware [Stronks and Dagnelie 2013]. Our framework for integrating PBO with a deep stimulus encoder is the first approach that does not require the forward model or stimulus encoder to be linear. This not only led to large improvements in both realism and performance for visual prostheses, but also has the potential for much broader applicability.
Our HILO strategy could work for any system that can be inverted using a DNN, potentially enabling its use for other modalities outside of prosthetic vision. Other smaller differences: - Our DNN is trained using a perceptual metric (Eq 7). Their encoder is a linear matrix inverse and thus only minimizes MSE, which has been shown to be misaligned with human visual perception [Lin and Kuo, 2011]. - Since we used simulated patients, we were able to test more patients spanning a wider, more realistic range. - We conducted additional experiments demonstrating the robustness of our approach.
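The two-stage division of labor described in the rebuttals above (train a stimulus encoder conditioned on patient parameters once, then search only over those latent parameters with the encoder frozen) can be illustrated with a deliberately toy sketch. Everything below is a stand-in and not the authors' code: the linear `encoder`/`forward` functions and the random-search duel loop are simplifications (the actual method uses a deep encoder and GP-based preferential Bayesian optimization with MUC acquisition).

```python
import numpy as np

rng = np.random.default_rng(0)

TRUE_PHI = np.array([1.2, -0.4])    # hidden patient-specific parameters (phi)
TARGET = np.linspace(0.0, 1.0, 16)  # toy "target image"

def encoder(target, phi):
    # Stand-in for the frozen DSE: maps a target to a stimulus, conditioned
    # on a candidate phi. (The real DSE is a deep network; this is linear.)
    return target * phi[0] + phi[1]

def forward(stim, phi):
    # Stand-in for the phosphene model: the percept depends on the TRUE phi.
    return (stim - phi[1]) / phi[0]

def percept_loss(phi_candidate):
    # Reconstruction error between the target and the percept produced when
    # the encoder assumes phi_candidate but the patient follows TRUE_PHI.
    stim = encoder(TARGET, phi_candidate)
    return float(np.mean((forward(stim, TRUE_PHI) - TARGET) ** 2))

# Duel loop: only the latent phi is searched; encoder weights stay fixed.
# The simulated patient "prefers" whichever percept is closer to the target.
best = rng.normal(size=2)
init_loss = percept_loss(best)
for _ in range(200):
    challenger = best + rng.normal(scale=0.2, size=2)
    if percept_loss(challenger) < percept_loss(best):
        best = challenger
final_loss = percept_loss(best)
```

A real preferential-BO loop would replace the accept/reject step with a GP preference model and an acquisition rule, but the structure is the same: the encoder is trained once across patients, and only $\phi$ is optimized per patient from pairwise preferences.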
Summary: * This paper proposes a flexible framework addressing personalized stimulus optimization, predominantly seen in _visual prostheses_. * The authors propose integrating state-of-the-art deep learning with a preferential Bayesian Optimization (BO) strategy to learn optimal patient-specific parameters in fewer trials. * The proposed approach is a two-stage process, with the first stage aiming at learning a Deep Stimulus Encoder (DSE) to optimize stimuli, and the second stage aiming at embedding the DSE learnt in the first stage in a sample-efficient preferential Bayesian optimization strategy. Strengths: * This paper aims at advancing the state of the art in sensory neuroprostheses, which have a significant impact on the present world. * The authors propose to overcome the strong assumptions made on the accuracy of the knowledge of patient-specific parameters, thus aiming to improve the accuracy of the deep stimulus encoder by optimizing the stimuli. * The authors use a sample-efficient Bayesian optimization method (Human-in-the-Loop Optimization (HILO)) to optimally select the patient-specific parameters used in the deep stimulus encoder. * The authors have empirically evaluated the proposed approach in optimizing the personalized stimulus encoder for several simulated patients. Further, the authors evaluate the performance of their method when the preferential inputs are noisy, thereby demonstrating its robustness. Weaknesses: * The main weakness of this paper lies in its _limited novelty_, as the proposed approach is merely a _combination_ of state-of-the-art forward models and sample-efficient Bayesian optimization to optimally find personalized stimuli. * Although this paper improves on the techniques used in the visual prostheses domain, the overall contribution of this paper is in the _application of existing ML/DL techniques_ to neuroprostheses, visual prostheses in particular, and thus may not advance the state of the art in the ML/DL community.
* Though the paper reads well in many parts, it demands significant knowledge/domain expertise in visual prostheses for a thorough understanding of the paper. * The authors have missed defining/describing a few important mathematical quantities or parameters, which breaks the flow of understanding and also raises questions about reproducibility (please refer to the Questions section). * The details provided in _Section 4_ give the impression that the proposed framework is tightly coupled with the problems in visual prostheses. * The authors have discussed a few experimental settings in Section 4 that would be more apt to discuss in Section 5, as they do not contribute to the proposed method. * The visualization schemes used to show the empirical results do not effectively demonstrate the superiority of the proposed approach. Also, the authors do not briefly discuss the trends and the possible reasons behind them. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: * Line 159: What does $n_e$ stand for? The number of electrodes? * Line 169: What is the significance of the threshold $\epsilon$, and why have the authors set it to $e^{-2}$? * Line 198: Why did the authors consider only 10 simulated patients for transfer learning? Is that not sparse? * Line 200: The hyperparameters considered and their bounds are mentioned neither in the main paper nor in the supplementary material. Could the authors confirm the hyperparameter set of each kernel? * Line 214: The noise parameter s is set to be $\frac{1}{0.01}=100$, whereas in Line 285, s is set to 1e-4; is this a typo, or is it $\frac{1}{1e-4}$? * Line 221: What is the significance of $\beta$ in equation (7)? * Line 235: What does $R_i$ stand for? Is $i \in$ {$ 1,...,n_e$}? * Line 239: Why have the authors not considered comparing their empirical results with Beyeler et al. 2019? * Line 250: How do the results compare with Granley et al. 2022 if L1 is used in computing the test loss for the DSE?
* Line 302: Could batch-BO strategies further reduce the latency and thus provide a quicker solution by parallelizing the data acquisition? * Figure 3: Why is there an increasing (or decreasing) trend in the loss (or accuracy) plot during the initial phase of the optimization? * Figure 4: Why is there a lot of variance in the bottom-right plot for "Out of Distribution" misspecification? The following minor (typographical) errors need to be fixed. * Inconsistent referencing style, and hyperlinks are missing throughout the paper. It is strongly suggested not to modify the style files provided by NeurIPS. * Line 59: "...visual prostheses that, where ...." * Figure 2: The current font size in the leftmost plot is illegible. * Line 151: The section/sub-section number has to be clearly mentioned. * Line 204: Provide a quick reference to the Brier score in the main paper, even though it is mentioned in the supplementary. * Line 237: "ratings as amplitude, frequency,...." * Line 259: "... perceptual loss (figure 3.B)....(figure 3.C)" **References** * Michael Beyeler, Devyani Nanduri, James D. Weiland, Ariel Rokem, Geoffrey M. Boynton, and Ione Fine. A model of ganglion axon pathways accounts for percepts elicited by retinal implants. Scientific Reports, 9(1):1–16, June 2019. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: * The optimization performance of PBO crucially depends on the _generalization performance of the inherent GP surrogate model_. The authors reduce latency by avoiding online inference and instead use a transfer learning strategy.
However, the source considered in the adopted transfer learning uses sparse data generated from 10 patients, thus raising concerns about the optimality of the kernel parameters and the generalizability of the GP surrogates. * Although the authors discuss the empirical results, the lack of sufficient experiments and competing baselines (Beyeler et al. 2019, Granley et al. 2022) in the empirical results makes it hard to conclude the superiority of the proposed approach (please refer to the Questions section). * Sequential PBO may not be very suitable in time-critical situations; thus, more efficient strategies such as batch-BO could be of great use to reduce the possible latency in finding the optimal stimuli on the fly. **References** * Michael Beyeler, Devyani Nanduri, James D. Weiland, Ariel Rokem, Geoffrey M. Boynton, and Ione Fine. A model of ganglion axon pathways accounts for percepts elicited by retinal implants. Scientific Reports, 9(1):1–16, June 2019. * Jacob Granley, Lucas Relic, and Michael Beyeler. Hybrid Neural Autoencoders for Stimulus Encoding in Visual and Other Sensory Neuroprostheses. October 2022. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are grateful for the reviewer's attentive analysis and helpful feedback. ## Weaknesses/Limitations > The main weakness lies in its limited novelty, as the proposed approach is merely a combination of state-of-the-art forward models and Bayesian optimization. As the reviewer points out, our work builds on several previous ideas. However, combining these ideas into a realistic HILO scheme required significant improvements to each individual component, as well as complex design choices to integrate the components into the optimization framework. Our specific points of novelty are: - A HILO strategy combining PBO with deep stimulus encoding. Our DSE was trained to output optimal stimuli conditional upon a latent set of patient-specific parameters, allowing PBO to optimize the latent parameters instead of the high-dimensional stimuli. This unique combination addresses the main limitations of previous approaches to HILO in BCI: reliance on simplistic encoders [30] or very limited optimization dimensionality [55-61]. HILO proved highly effective, improving phosphene quality and robustness (Figures 3, 4). - A new phosphene model, which matches patient data significantly better (revised Table 2) and improves computational efficiency (~50x faster, ~120x less memory than [14]). - A significantly improved DSE, which performed better than the SOTA (.108 vs .12 L1 loss) [14], despite modeling a realistic number of patient-variation parameters (13 vs. 2). Previous approaches require retraining for new patients, preventing HILO. > This paper … may not advance state-of-the-art in ML/DL We agree that the advances in this paper mainly benefit the neuroprosthesis and BCI communities. Thus, this paper targets the "Neuroscience and cognitive science (e.g., neural coding, brain-computer interfaces)" track in the call for papers. > ...the proposed framework is tightly coupled with visual prostheses.
While our implementation is specific to visual prostheses, the proposed framework of 1) building a forward model, 2) training a DSE across patients, and 3) learning optimal patient parameters is not domain specific. Forward models [48-51] and DSEs [52-54] have been developed for multiple sensory modalities and could potentially be adapted for HILO. > GP Hyperparameter Optimality The reviewer raises an excellent point. We consider it a strength that HILO worked using only 10 transfer-learning patients, a number chosen to match clinical settings with limited human data availability. To test optimality, we ran HILO again, finding hyperparameters for each patient prior to optimization using 200 random duels. Using these 'patient-optimal' hyperparameters, HILO performed similarly to the transfer approach (Fig R3). While keeping hyperparameters constant is common [57-60], online optimization is an alternative method. We tested this using update periods of 1, 5, 10, or 20 duels (Fig. R1). All update periods eventually converged to performance similar to that achieved with transfer learning, but required more optimization time. Together, these results suggest that transfer learning is a good strategy for HILO. > Comparing results with Beyeler 2019 [8] Although the previous SOTA model [11] is a simple improvement on [8], we agree that readers unfamiliar with these details would benefit from a direct comparison and have added additional baselines to Table 2 (see pdf). > Batch BO While sequential BO would be quick (~17 minutes with 100 iterations, 10 seconds each), batch BO indeed could speed up optimization. We consider 2 options: 1) the user compares a batch of stimuli in each iteration, and 2) batches of duels are precomputed, to save on acquisition time between duels. While 1) acquires more information per response, patients must remember multiple stimuli before making comparisons, increasing cognitive burden.
Future work might consider whether parallelization of data acquisition makes up for the increased difficulty. We tested 2) using batched MUC [40] with 1, 3, 6, or 10 duels (Fig R3). Batched MUC reduced the acquisition time required to reach a desired performance, at the cost of requiring more duels. In practice, the batch size could be determined by balancing patient response and acquisition time. Faster acquisition functions (e.g. KernelSelfSparring [64]) could further improve this time. ## Questions > What does $n_e$ stand for? The number of electrodes. > What is the significance of $\epsilon$, why $e^{−2}$? The phosphene model is derived so that an electrode's phosphene will have an area (number of pixels above $\epsilon$) of $\rho$. $e^{-2}$ was chosen based on thresholds in [11]. > Could authors confirm the hyperparameter set? Due to space constraints, we refer to supplementary section 4 of [4], who used the same kernels and hyperparameters (lengthscales and scaling factor), each with bounds of [exp(-10), exp(10)]. > s is set to be 1/0.01, whereas in Line 285, s is set to 1e-4, is this a typo? Yes, the referenced quantity should be 1/1e-4. We will modify Eq 6 and update the notation throughout the paper. > What is the significance of $\beta$ in Eq 7? $\beta$ controls the weighting between MSE and VGG-extracted features. See [14] App B for discussion. > What is $R_i$? $R_i^2$ is the coefficient of determination ($R^2$) for the ith shape descriptor (area, eccentricity, orientation). See equations 10-13 in [8] for more details. > Why is there an increasing trend in loss during the initial phase of optimization? This is a common pattern in PBO (e.g. Fig. 5 in [22]). The estimate of the maximum of the GP mean (Eq. 4) changes rapidly in the initial phases of optimization, meaning the optimum estimate might temporarily worsen. This is in contrast to noise-free approaches, where the maximum is taken over previously sampled points and is guaranteed never to worsen performance. > Why is there a lot of variance for OOD misspecification?
Note the smaller y-axis on the OOD plot, which causes the variance to appear larger. We have improved the visualization to use uniform scaling, showing similar variance for OOD patients (Revised Fig 4). --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their efforts in providing a detailed rebuttal. I have gone through the other reviews and the corresponding rebuttals, and I expect the authors to incorporate the changes to improve the quality of the paper. The authors have sufficiently addressed my concerns, and thus I am slightly increasing my scores.
Rebuttal 1: Rebuttal: We thank the reviewers for their thoughtful analysis and feedback, which has been invaluable for understanding how to improve our paper. We are pleased that reviewers in general agreed on the paper's significance towards realistic optimization of prosthetic vision, and that it advances the state of the art in this domain (YFQD, bw3i, HJZJ). We are also glad reviewers found HILO to be "steps more realistic than previous approaches" (bw3i), well evaluated (HJZJ), and robust (YFQD). While the general ideas behind the individual components of our algorithm have been proposed in isolation (phosphene model, deep stimulus encoding, PBO), most reviewers recognized the novelty and practical feasibility of our approach, which improves each component while integrating them into a realistic and efficient optimization pipeline (bw3i and HJZJ). Reviewers raised thoughtful questions about batch BO and BO hyperparameters (YFQD), the use of simulated data (HJZJ), and the feasibility of the approach on human patients (bw3i, HJZJ). In the following responses, we address each of the reviewers' questions and concerns in detail. Additional experiments were conducted on batch Bayesian optimization and on the optimization of GP kernel hyperparameters to address reviewers' feedback. We have uploaded a pdf containing figures for these experiments, a revised Table 2 with more baseline comparisons, as well as improvements to the visualization schemes of the original Figures 3 and 4, in accordance with reviewer feedback. Reviewer feedback will be integrated into a revised version of the paper, which, if accepted, will present our results with improved clarity and include the experiments and improvements suggested by reviewers. ## Additional References for rebuttals: [46] Luo et al. 2016. "Long-Term Repeatability and Reproducibility of Phosphene Characteristics in Chronically Implanted Argus II Retinal Prosthesis Subjects." American Journal of Ophthalmology [47] Yue et al. 2021.
“Restoring Color Perception to the Blind – an Electrical Stimulation Strategy of Retina in Patients with End-Stage Retinitis Pigmentosa.” Ophthalmology [48] Dorman et al. 2005. “Acoustic Simulations of Combined Electric and Acoustic Hearing (EAS).” Ear and Hearing [49] Okorokova et al. 2018. “Biomimetic Encoding Model for Restoring Touch in Bionic Hands through a Nerve Interface.” Journal of Neural Engineering [50] Saal et al. 2017. “Simulating Tactile Signals from the Whole Hand with Millisecond Precision.” Proceedings of the National Academy of Sciences [51] Mileusnic et al. 2006. “Mathematical Models of Proprioceptors. I. Control and Transduction in the Muscle Spindle.” Journal of Neurophysiology [52] Drakopoulos et al. 2023. “A DNN-Based Hearing-Aid Strategy For Real-Time Processing: One Size Fits All.” In ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) [53] Drakopoulos et al. 2022. “A Differentiable Optimisation Framework for The Design of Individualised DNN-Based Hearing-Aid Strategies.” In ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) [54] Drakopoulos et al. 2023. “A Neural-Network Framework for the Design of Individualised Hearing-Loss Compensation.” IEEE/ACM Transactions on Audio, Speech, and Language Processing [55] Ding et al. 2018. “Human-in-the-Loop Optimization of Hip Assistance with a Soft Exosuit during Walking.” Science Robotics [56] Nielsen et al. 2015. “Perception-Based Personalization of Hearing Aids Using Gaussian Processes and Active Learning.” IEEE/ACM Transactions on Audio, Speech, and Language Processing [57] Louie et al. 2021. “Semi-Automated Approaches to Optimize Deep Brain Stimulation Parameters in Parkinson’s Disease.” Journal of NeuroEngineering and Rehabilitation [58] Lorenz et al. 2019. “Efficiently Searching through Large TACS Parameter Spaces Using Closed-Loop Bayesian Optimization.” Brain Stimulation [59] Tucker et al. 
2020. “Preference-Based Learning for Exoskeleton Gait Optimization.” [60] Lorenz et al. 2021. A Bayesian optimization approach for rapidly mapping residual network function in stroke. Brain 144, 2120–2134 [61] Zhao et al. 2021. “Optimization of Spinal Cord Stimulation Using Bayesian Preference Learning and Its Validation.” IEEE Transactions on Neural Systems and Rehabilitation Engineering [62] Nanduri, Devyani. 2011. “Prosthetic Vision in Blind Human Patients: Predicting the Percepts of Epiretinal Stimulation.” ProQuest Dissertations and Theses. Ph.D., United States -- California: University of Southern California. [63] Da Cruz et al. 2013. “The Argus II Epiretinal Prosthesis System Allows Letter and Word Reading and Long-Term Function in Patients with Profound Vision Loss.” British Journal of Ophthalmology [64] Sui, Yanan, Vincent Zhuang, Joel W. Burdick, and Yisong Yue. "Multi-dueling bandits with dependent arms." arXiv preprint (2017). Pdf: /pdf/581a8129743fba42e7045fa82406d0f0ac141edd.pdf
NeurIPS_2023_submissions_huggingface
2023
On Learning Latent Models with Multi-Instance Weak Supervision
Accept (poster)
Summary: The authors define and study the problem of multi-instance partial label learning (PLL), where weak supervision is given in the form of a (potentially) unknown transition function $\sigma$, which maps the ground-truth labels onto some label set $S$. Under this problem setting, the paper goes on to show multiple theoretical contributions: * 1) Under a known transition function (i.e. SUM2 in the MNIST task), a sufficient and necessary condition for learnability is M-unambiguity. To get faster rates (removing the exponential term of 1/M), they show a stronger condition of 1-unambiguity (i.e., that perturbing a single label also changes the output of the transition function). They generalize this to consider a top-k loss that is a more efficient surrogate. * 2) They study the setting of learning multiple classifiers with shared label spaces, and provide similar learnability results under analogues of M-unambiguity for multiple classifiers, with an additional assumption on the boundedness of the risk function. * 3) Finally, they provide a similar result for learning a single classifier under an unknown transition function, which requires a new assumption that the transition space is unambiguous and that the risk function is bounded. The paper also provides experiments to evaluate the quality of existing weakly supervised/neuro-symbolic architectures, which support their theoretical findings and takeaways. Strengths: The paper provides comprehensive theoretical results for the PLL setting. They provide results for multiple different problem setups, considering known/unknown transition functions and learning a single classifier or multiple classifiers. They provide rigorous theoretical analysis, which requires (in my opinion) rather intuitive assumptions. Relevant experiments are nicely descriptive and verify their theoretical analysis. Weaknesses: As mentioned in the paper, scalability seems to be an issue with the experimental setting.
On what seems to be a not overly complex task (a weighted sum of 4 MNIST digits), performance indeed drops significantly, which calls into question the widespread potential of the PLL setting (and not this paper in particular). While examples are given that illustrate when the various ambiguity assumptions are violated or not, a more in-depth discussion in the case of Definition 5 would be appreciated, as this is the most interesting setting of an unknown transition function. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Are the scalability issues observed in Table 2 primarily due to the chosen neuro-symbolic methods, the PLL setting, or a combination of both factors? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: Limitations are sufficiently addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > Q: As mentioned in the paper, scalability seems to be an issue with the experimental setting. On what seems to be a not overly complex task (a weighted sum of 4 MNIST digits), performance indeed significantly drops, which calls into question the widespread potential of the PLL setting (and not this paper in particular). Are the scalability issues as observed in Table 2 primarily due to the chosen neuro-symbolic methods, the PLL setting, or a combination of both factors? The scalability issue is caused by both the PLL problem and the neuro-symbolic methods. Please check the section **Comments on scalability and accuracy** in our general rebuttal at the top for a detailed explanation. About the potential of multi-instance PLL and its applications, please see the **Importance of multi-instance PLL** section in the general rebuttal. Overall, we understand the criticism towards the accuracy and the scalability of current neuro-symbolic learning techniques. However, as we discuss in the response to all the reviewers, we see these issues as an opportunity to develop new neuro-symbolic learning techniques (especially when the transition functions are unknown) and explore new research questions. We will add this discussion in the revision of our work. > Q: While the examples are given that illustrate when the various ambiguity assumptions are violated or not violated, a more in depth discussion in the case of Definition 5 would be appreciated, as this is the most interesting setting of an unknown transition function. Thanks for the comments. We will follow your suggestion and provide a more in-depth discussion in the revised version. Below, please find a summary of clarifications. - About the intuition. This unambiguity notion requires that for each candidate transition $\sigma'$ and each diagonal label vector, flipping one of the labels leads to a different partial label $s$.
This ensures that even if the learner chooses a wrong transition, it is still possible to detect the classification error by observing the partial error. - What happens when the transition space is ambiguous? If $\mathcal{G}$ is not unambiguous, then a wrong transition and an imperfect classifier may lead to zero partial risk. In other words, a wrong transition may "hide" the classification mistakes. - How to examine ambiguity in practice? Although in practice the true transition is hidden, one can examine this condition by checking a sufficient condition: for each $\sigma' \in \mathcal{G}$ and each pair of distinct labels $l_i \ne l_j$, the set $\\{\sigma'(y) \mid y \in \\{l_i,l_j\\}^M \\}$ is not a singleton. The latter condition ensures that, given a fixed diagonal label vector, when the classifier makes mistakes the predicted partial labels are not unique, and hence cannot all agree with the ground-truth partial label. --- Rebuttal Comment 1.1: Title: Reviewer response Comment: Thanks for your detailed response! I appreciate the additional discussion on scalability and the importance of the problem setting; I'll keep my score the same and wait to hear responses from other reviewers.
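The sufficient condition quoted in the rebuttal above lends itself to a direct enumeration check. The sketch below is our own illustration, not the authors' code (the helper name `unambiguous_sufficient` is hypothetical): it verifies the condition for the SUM transition with M = 2 over the digit labels, and shows that a constant transition, which trivially hides all classification mistakes, fails it.

```python
from itertools import product

def unambiguous_sufficient(sigma, labels, M):
    """Sufficient condition from the rebuttal: for every pair of distinct
    labels (l_i, l_j), the image of sigma over {l_i, l_j}^M must contain
    more than one partial label."""
    for li in labels:
        for lj in labels:
            if li == lj:
                continue
            image = {sigma(ys) for ys in product([li, lj], repeat=M)}
            if len(image) == 1:
                return False  # a pair of labels collapses to one partial label
    return True

labels = range(10)  # e.g. MNIST digit classes
assert unambiguous_sufficient(sum, labels, M=2)           # SUM-2 passes
assert not unambiguous_sufficient(lambda ys: 0, labels, M=2)  # constant transition fails
```

Enumeration costs $O(|\mathcal{G}| \cdot L^2 \cdot 2^M)$ evaluations of $\sigma'$, which is cheap for the small $M$ used in the paper's experiments.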
Summary: This paper studies the questions of learnability and generalization under an interesting form of supervision feedback: namely, when the true label of interest is not observed, but the learner instead has access to the output of a "transition function" $\sigma(y_1, \ldots, y_M)$ computed on the labels of an $M$-tuple of examples. The paper gives necessary and sufficient conditions on the transition function for learnability from two "partial" (evaluated on the aggregate/partial labels) loss functions: the partial 0/1 loss and a surrogate for the "semantic loss" studied in other related literature. These results are extended in multiple directions, and various details such as convergence rates are also derived. Strengths: - The results are well presented, and the key condition (M-unambiguity) is shown in Appendix E.3 to be a natural extension of the "small ambiguity degree" assumption from the Partial Label Learning (PLL) literature. - The authors connect the convergence rate (in terms of sample complexity) of learning to the degree of unambiguity of the transition function. - The results provide a theoretical basis for learning with the top-$k$ approximate _semantic loss_, which was introduced by other work. - The core results are extended in multiple directions, including to the case where there are multiple different classifiers and to the case where the transition function is unknown (under additional assumptions). - The paper studies and provides a nice set of learnability results for an interesting case of weak supervision that could be practically relevant. Weaknesses: - Core technical contributions/difficulties are not explained enough (e.g., Lemma 1 seems like the key result for Thm 1, and the rest follows from standard tools?). What was the key technical challenge in proving the generalization results, and why is it novel/original? - It's not clear what practical insights are gained from the results.
The experiments section seems like a proof of concept---no new method other than the "baselines" is proposed or evaluated based on the theory. The main insight seems to be the qualitative relationship between $M$ and the bound, but $M$ is not really a controllable parameter in practice. - Related to the previous point about the experiments, this setup is missing a compelling application. What's a more realistic practical scenario where learning from this type of supervision is relevant? I.e., where observing $s$ for an $M$-tuple of examples is much more practical than observing $y$ for each example? Technical Quality: 3 good Clarity: 3 good Questions for Authors: In the Remark starting at L172 the authors suggest a possible connection between the perturbation stability of the transition function and learnability/convergence rate. Could this be connected in any way to the algorithmic stability results (e.g., Bousquet and Elisseeff)? Such a connection might lead to better convergence rates? - Am I correct that the PAC failure probabilities $\delta$ are w.r.t. the sampling of $\mathcal{D}_P$ (not $\mathcal{D}$)? It might be good to clarify this. A few questions I had while reading the intro/setup, some of which were cleared up later by the main text, but which suggest that clarity/flow could be improved: L117 Learnability: What is a "partial learning algorithm"? L138: should this be $\sigma(y,\ldots,y) \ne \sigma(y',\ldots,y')$? L182: why do we need to "populate $\sigma$" to compute the zero-one loss? I don't fully follow. In the sum-M case, can't we just sum the outputs predicted by $f(x_i)$ and then compare to $s$? Or is the point that this is problematic if we wanted to use a *surrogate loss* such as cross entropy, so we'd need to form a distribution over the image of $\sigma$? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors describe one of the main limitations of their work (the somewhat strict assumptions required on the transition function $\sigma$ due to the worst-case distributions they consider). However, the discussion could be expanded to mention broader limitations, such as the current lack of practical insights to glean from the theoretical results. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
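Regarding the reviewer's L182 question above, the reading that the partial zero-one loss in the sum-M case can be computed by applying $\sigma$ to hard per-instance predictions and comparing with $s$ appears consistent with the setup; a distribution over the image of $\sigma$ would only be needed for a differentiable surrogate such as the semantic loss. A minimal sketch (the function name is our own, hypothetical):

```python
def partial_zero_one_loss(preds, s, sigma=sum):
    """Partial 0/1 loss for one bag: apply the known transition sigma to the
    hard per-instance predictions and compare with the observed partial
    label s. The hidden per-instance labels are never consulted."""
    return 0 if sigma(preds) == s else 1

# sum-M example (M = 2): several label vectors explain the same s = 7,
# which is exactly the residual ambiguity the unambiguity conditions control.
a = partial_zero_one_loss([3, 4], 7)  # sigma(preds) matches s
b = partial_zero_one_loss([3, 5], 7)  # mismatch is detected
c = partial_zero_one_loss([2, 5], 7)  # a different consistent explanation
```

Note that both `[3, 4]` and `[2, 5]` achieve zero partial loss for `s = 7`, even though at most one matches the hidden labels; this is why learnability hinges on the ambiguity of $\sigma$ rather than on the loss itself.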
Rebuttal 1: Rebuttal: > Q: Core technical contributions / difficulties are not explained enough (e.g., Lemma 1 seems like key result for Thm 1, and the rest follows from standard tools?) What was the key technical challenge to proving the generalization results, and why is it novel / original? Thanks for the comment. We will clarify the above in the revision of our work. Below, we provide some clarifications. Theorem 1 builds upon several non-trivial and original intermediate results that must be shown before applying the standard VC theory: - For empirical error: A lower bound of the classification risk (Lemma 1). - For generalization error: A bound of the VC dimension for the multi-instance partial label predictor (Lemma 3). - Counterexamples for arguing the necessity of our proposed learning condition (last paragraph of the proof of Theorem 1). The proof of Theorem 2 is more involved: it relies on a variant of the Rademacher complexity from [3] and the following original results: - Lemma 1. - An inequality that bounds the top-k loss with the zero-one loss (Lemma 5), which requires the construction of an intermediate l1 loss (Definition 11). - Lipschitzness of the semantic loss (Lemma 7). That result is further combined with a contraction lemma from [8] (Lemma 6) to bound the Rademacher complexity of the model. > Q: It's not clear what practical insights are gained from the results. The experiments section seems like a proof of concept---no new method other than the "baselines" is proposed or evaluated based on the theory. The main insight seems to be the qualitative relationship between M and the bound, but M is not really a controllable parameter in practice. Indeed, our work does not aim to propose new algorithms. Instead, it focuses on theoretically analyzing existing methods. However, the practical insights are not limited to the qualitative relationship with $M$. 
The main takeaways of our bounds for practitioners also include the following: - The learning difficulty (rate of convergence) is characterized by the “ambiguity degree” of the transition $\sigma$ (i.e., if the transition is $M$-/$I$-unambiguous), which can be determined without training (Theorem 1 and Proposition 1). - The impact of choosing the approximation strength $k$ is not always monotone. In general, the choice leads to a tradeoff between approximation error and estimation error (Theorem 2). > Q: Related to the previous point about the experiments, this setup is missing a compelling application. What's a more realistic practical scenario where learning from this type of supervision is relevant? I.e., where observing s for an M-tuple of examples is much more practical than observing y for each example? A compelling application of our formulation is visual question answering [22], where the observation $s$ (answer) is a function of multiple hidden variables $y$ (object types and their relations). The partially labeled data $(x,s)$ is used to train a classifier for learning the mapping $x \mapsto y$. The work in [22] shows that models trained via multi-instance PLL achieve better accuracy than state-of-the-art neural end-to-end architectures. Learning latent structures from downstream observations is also of great interest in NLP [AdditionalRef1] (e.g., learning semantic parsers from sentiment classification tasks), since it has been shown to achieve better model accuracy while offering interpretability. Please also check the section **importance of multi-instance PLL** in our general rebuttal above for further comments. > Q: In the Remark starting at L172 the authors suggest a possible connection between the perturbation stability of the transition function and learnability / convergence rate. Could this be connected in any way to the algorithmic stability results (e.g., Bousquet and Elisseeff)? Such a connection might lead to better convergence rates? 
Thanks for pointing this out. We agree that there could be deeper connections between these two types of instabilities. Intuitively, a more stable transition will lead to a more stable partial loss and a more stable PLL algorithm. We will explore this direction in future versions of our work. > Q: Am I correct that the PAC failure probabilities $\delta$ are w.r.t. the sampling of $\mathcal{D}_P$ (not $\mathcal{D}$)? It might be good to clarify this. Yes, it is correct. We will clarify this in the revised version. > Q: L117: Learnability. What is a "partial learning algorithm"? A partial learning algorithm is one that takes the partially labeled data $\mathcal{T}_P$ as input and outputs a classifier in $\mathcal{F}$. We will improve the writing to make this definition clear. > Q: L138: should be $\sigma(y,\dots,y) \ne \sigma(y',\dots,y')$ It's a typo, thanks for pointing it out. > Q: L182: why do we need to "populate $\sigma$" to compute the zero-one loss? I don't fully follow. In the sum-M case, can't we just sum the outputs predicted by $f(x_i)$ and then compare to $s$? Or is the point that this is problematic if we wanted to use a surrogate loss such as cross entropy, so we'd need to form a distribution over the image of $\sigma$? It's the latter. Populating $\sigma$ is required only when computing the surrogate loss in Section 3.2. We will improve the presentation to make it clearer from the context. --- - [AdditionalRef1] Backpropagating through Structured Argmax using a SPIGOT: Peng et al., ACL 2018.
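The corrected condition quoted above ($\sigma(y,\dots,y) \ne \sigma(y',\dots,y')$ for $y \ne y'$) can be checked mechanically for simple transitions. Below is a small illustrative Python sketch; the function name is ours, not from the paper, and the transition functions are toy stand-ins:

```python
def constant_tuple_unambiguous(sigma, M, classes):
    """Check that sigma(y, ..., y) != sigma(y', ..., y') for all y != y'."""
    signals = [sigma([y] * M) for y in classes]
    return len(signals) == len(set(signals))

# SUM over M copies of the same digit gives M * y, which is injective in y.
assert constant_tuple_unambiguous(sum, 3, range(10))

# A parity-style transition collapses all constant tuples and fails the check.
assert not constant_tuple_unambiguous(lambda ys: sum(ys) % 2, 2, range(10))
```

For small label spaces this kind of check can be run exhaustively before any training, which is in line with the rebuttal's remark that the ambiguity degree "can be determined without training".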
Summary: This paper studies a weakly supervised learning scenario where supervision signals are given to sets of instances (instead of individual instances), while the goal is still to predict labels of unseen individuals. For example, the learner is provided with a dataset in which each training example comprises a set of instances $(x_1, x_2)$ (for which the gold labels $y_1 = 1$ and $y_2 = 2$ are unobservable) and an aggregate signal $s=3$ (which is calculated by a known function $s=\sigma(y_1,y_2)=y_1 + y_2$). The authors present learnability results for this scenario under certain assumptions. One main result is that if the function $\sigma$ has *M-unambiguity*, a perfect classifier can be learned as the number of training data approaches infinity. The authors further extend the results to situations where the function $\sigma$ is unknown. Strengths: - This paper concerns an important problem setting called *multi-instance Partial Label Learning* (multi-instance PLL), which is an extension of the standard Partial Label Learning (PLL) problem. - The authors introduce the notion of "M-unambiguity" and show its utility in proving the learnability of the considered problem scenario under specific assumptions. - The authors further propose the notions of "multi-unambiguity" and "unambiguous transition space" to tackle the scenarios of learning multiple classifiers and unknown transition functions. Weaknesses: - The theoretical results presented in this submission have limited significance. Generally, learnability means that a hypothesis class is learnable under any data distribution. However, the authors state that they prove learnability under distributions that concentrate mass on a single instance or label. This assumption can be invalidated in the real world. This submission primarily considers a “toughest" distribution that concentrates mass on a single instance, implying all the instances in the training data are the same. 
Such a distribution is unrealistic in real-world applications. Consequently, positive results under this assumption are not so meaningful. The authors may need to explore more realistic data distributions. - Theoretical aspects of the problem setting, which is called multi-instance Partial Label Learning in this submission, have previously been examined in the literature of weakly supervised learning, as seen in [1]. This setting is known as *learning from aggregate observations*. In [1], the consistency of learners is investigated, and the expected log-likelihood in [1] is close to the semantic loss studied in this submission. To avoid confusion and potential overlap, it is essential for the authors to discuss and elucidate the distinctions between these studies. - The terminology "Multi-Instance Partial-Label Learning" has been used in prior research, such as in [2]. Although [2] examines a different problem setting than this paper, to avoid confusion, it would be beneficial for the authors to adopt a distinct term when referring to the problem setting considered in this submission. - The proof of Lemma 1 appears to be convoluted. Specifically, the first inequality in Equation (26) warrants further clarification. It is not immediately evident why the assertion $R_{P}^{01}(f;\sigma) \ge \sum_{l_i\neq l_j} E_{l_i,l_j}(f)^M$ holds true. In a similar vein, could you explain the first inequality of Eq. (32) in the proof of Lemma 4? Do these lemmas rely only on the M-unambiguous condition? [1] Zhang, Y., Charoenphakdee, N., Wu, Z., & Sugiyama, M. (2020). Learning from aggregate observations. Advances in Neural Information Processing Systems, 33, 7993-8005. [2] Tang, W., Zhang, W., & Zhang, M. L. (2022). Multi-instance partial-label learning: Towards exploiting dual inexact supervision. arXiv preprint arXiv:2212.08997. 
Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: - Is the expected log-likelihood discussed in [1] equivalent to the semantic loss (or the top-k surrogate loss) examined in this submission? - Could you provide a clearer explanation for the proof of Lemma 1, particularly regarding the first inequality in Equation (26)? Could you also elaborate on the first inequality in Equation (32) in the proof of Lemma 4? Is it correct to state that the proofs of these lemmas solely rely on the M-unambiguous condition? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: The limitation of the submission is that it relies on assumptions and conditions, such as a concentration of mass on a single instance or label, which may not align with realistic data distributions and weaken the applicability and persuasiveness of the results. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
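The aggregate-supervision setup from the summary can be made concrete with a tiny enumeration sketch (our own illustration, not from the paper): given the observed signal $s$ and the known transition $\sigma(y_1,y_2)=y_1+y_2$, the set of hidden label tuples consistent with $s$ is exactly what makes the supervision "partial".

```python
from itertools import product

# Toy multi-instance PLL setup from the summary: M instances with hidden
# labels in {0, ..., 9}, and an observed aggregate s = sigma(y_1, y_2) = y_1 + y_2.
def candidate_labels(s, M=2, classes=range(10)):
    """All hidden label tuples consistent with the observed signal s."""
    return [ys for ys in product(classes, repeat=M) if sum(ys) == s]

cands = candidate_labels(3)
print(cands)  # [(0, 3), (1, 2), (2, 1), (3, 0)]
```

With $s=3$ there are four candidate label pairs, so the gold pair $(1, 2)$ is never identified by a single example; learnability results must argue that the ambiguity washes out over many examples.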
Rebuttal 1: Rebuttal: > Q: The theoretical results presented in this submission have limited significance. The authors state that they prove learnability under distributions that concentrate mass on a single instance or label. This assumption can be invalidated in the real world. ... Consequently, positive results under this assumption are not so meaningful. The authors may need to explore more realistic data distributions. We kindly point out that there seems to be a misunderstanding about our results. Our ambiguity conditions in Definitions 1, 4 and 5 *do* apply to any probability distribution, i.e., if these ambiguity conditions are met, then we can learn under any distribution of the training data. The only reason we mentioned concentrated distributions is to prove that the proposed conditions are necessary (instead of just being sufficient). To put it differently, we show learnability under the toughest distributions in order to ensure learnability under any distribution. Such an approach of considering tough cases is commonly adopted in PLL and general ML theory, e.g., in the classical theoretical PLL theory [26]. The reviewer can also check the definition of learnability in the last paragraph before Section 3.1. We apologize for any confusion caused and will improve our presentation to emphasize the generality of our results. Practically, deriving general learning conditions from the concentrated distributions is still meaningful. It shows that if the real-world data is not balanced but close to being concentrated on very few instances/classes, then PLL can be more difficult. Furthermore, we would like to note that our conditions are satisfied by many neuro-symbolic tasks, as discussed in Examples 2 & 3. > Q: Theoretical aspects of the problem setting have previously been examined in the literature, as seen in [1]. This setting is known as learning from aggregate observations. 
In [1], the consistency of learners is investigated, and the log-likelihood in [1] is close to the semantic loss studied in this submission. Thanks for pointing out [1]. We agree that the learning scenario is similar, yet our contributions remain valid. Our differences with [1] are briefly summarized below: - Our results are stronger and closer to the aims of weak supervision in comparison to the results in [1]. This is because our work proposes sufficient and necessary conditions to recover the hidden labels (i.e., $Z_{1:K}$ in [1]), while consistency in [1] concerns the likelihood of the observed labels rather than the hidden ones: as defined in Definitions 3 & 4 of [1], two parameters are equivalent if they have the same likelihood on the observed partial label (rather than the hidden labels). - The surrogate losses are different. The log-likelihood loss in [1] is different from the semantic loss in Section 3.2 of our paper. Furthermore, we allow a top-k approximation for the surrogate loss, see Section 3.2. The difference is further explained in a separate response below. - Our work directly extends the theory of PLL, which is thoroughly discussed in Appendices E through F. In particular, our learnability condition from Definition 1 directly extends the classical small ambiguity degree condition, as shown in Appendix E.3. In contrast, the connection to PLL is missing from [1]. We will follow the suggestion to include a comparison to [1] in our revision. > Q: The terminology "Multi-Instance Partial-Label Learning" has been used in prior research, such as in [2]. Thanks for pointing it out. We will consider a better name to reduce ambiguity. > Q: Is the expected log-likelihood discussed in [1] equivalent to the semantic loss (or the top-k loss) examined in this submission? No, they are different losses. 
To illustrate, consider Example 9 in our submission, where the induced Boolean formula is $\varphi = \{(A_{1,2} \wedge A_{2,0}) \vee (A_{1,0} \wedge A_{2,2})\}$ (recall that $A_{i,j}$ is a Boolean variable that is true iff the $i$-th input digit has label $j$). The negative log-likelihood in [1] is the sum of the probability of the two label vectors, i.e., $-\log(w(A_{1,2}) \times w(A_{2,0}) + w(A_{1,0}) \times w(A_{2,2}))$, where $w(\cdot)$ denotes the confidence of the neural classifier for the corresponding prediction. However, the semantic loss is given by $-\log(1 - (1 - w(A_{1,2}) \times w(A_{2,0})) \times (1 - w(A_{1,0}) \times w(A_{2,2})))$, i.e., the sum of the right column of Table 3. In general, the semantic loss is designed to capture Boolean constraints over the hidden labels and is standard in neuro-symbolic learning. Notice also that our work additionally allows a top-$k$ approximation of the semantic loss, as opposed to [1] which only considers the exact likelihood. > Q: Could you provide a clearer explanation for the proof of Lemma 1, particularly regarding the first inequality in Equation (26)? Could you also elaborate on the first inequality in Equation (32) in the proof of Lemma 4? Is it correct to state that the proofs of these lemmas solely rely on the M-unambiguous condition? The first inequality in (26): By $M$-unambiguity, we know that if all the $M$ input instances have the same label and are wrongly classified as another label, then the predicted partial label will be wrong. Therefore, the zero-one partial risk is lower bounded by the probability of such events, namely the sum of the same type of classification mistake being repeated $M$ times. The first inequality in (32): Similarly, by $I$-unambiguity, we know that if $I$ input instances in $\mathcal{I}$ have the same label and are wrongly classified as another label while all the remaining $M-I$ instances are correctly classified, then the predicted partial label will be wrong. 
Therefore, the zero-one partial risk is lower bounded by the RHS, which computes the probability of such events. Yes, it is correct to say that the proof of these lemmas solely relies on the $M$-unambiguity and/or $I$-unambiguity. --- Rebuttal Comment 1.1: Comment: Thanks for the detailed clarification. Most of my concerns have been addressed, particularly regarding the differences between this work and prior works. I hope the authors can clarify the distinction and consider using a different term to avoid ambiguity. Following are my thoughts: The M-unambiguity condition proposed in this work seems both reasonable and useful, though it results in a rather slow convergence rate; the additional 1-unambiguity condition leads to a better convergence rate but seems less practical than M-unambiguity. This dilemma remains unaddressed in this work. Despite this weakness, the contributions of this work seem solid. Thus, I would like to increase my score from 4 to 5. --- Reply to Comment 1.1.1: Comment: Thanks for your feedback and reconsideration. Regarding the additional comment on unambiguity conditions: Firstly, we think that 1-unambiguity is still a practical learnability condition since it can be satisfied by many transition functions in neuro-symbolic learning, such as $M$-SUM (as shown in Example 3) and SORT, which maps a list of numbers to the sorted list. Secondly, to address the dilemma, we have proposed $I$-unambiguity in Definition 10 to relax the learnability condition while achieving moderate convergence rates. Given these reasons, we still believe in the theoretical and practical importance of our proposed unambiguity conditions.
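The loss comparison for Example 9 in the exchange above can be made concrete with a short numerical sketch. The confidence values below are made up purely for illustration; the two formulas follow the negative log-likelihood and the semantic-loss expressions quoted in the rebuttal:

```python
import math

# Made-up classifier confidences for the two digits in Example 9;
# w1[j] stands for w(A_{1,j}) and w2[j] for w(A_{2,j}).
w1 = {0: 0.6, 2: 0.2}
w2 = {0: 0.3, 2: 0.5}

# Probabilities of the two models of phi = (A_{1,2} & A_{2,0}) | (A_{1,0} & A_{2,2}):
p_a = w1[2] * w2[0]  # the digits are (2, 0)
p_b = w1[0] * w2[2]  # the digits are (0, 2)

# Negative log-likelihood as in [1]: sum the probabilities of the label vectors.
nll = -math.log(p_a + p_b)

# Semantic loss as written in the rebuttal: a noisy-or combination of the models.
semantic = -math.log(1 - (1 - p_a) * (1 - p_b))

print(round(nll, 3), round(semantic, 3))  # the two losses differ in general
```

Running the sketch shows the two losses disagree on the same confidences, which supports the rebuttal's point that they are genuinely different objectives.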
Summary: The paper connects latent structural learning and neuro-symbolic integration and provides the first theoretical study of multi-instance PLL with a possibly unknown transition. Under such a weak supervision scenario where the transition is deterministic, it establishes the necessary and sufficient conditions, under minimal distribution assumptions, for the ERM-learnability of the problem. The authors also derive Rademacher-style error bounds using the top-k semantic loss. The empirical results support their theory. Strengths: 1. The study provides the first theoretical analysis of multi-instance PLL with a possibly unknown transition, which is a significant contribution to the learning theory community. 2. The theory holds for both known and unknown transition cases, and the assumptions are necessary and sufficient. The experiment results also support the learnability of classifiers and the conclusion that when $M$ gets larger, the learning process gets harder. Weaknesses: 1. The authors should move some discussion of the necessity of a deterministic transition function to the introduction to prevent confusion, since it seems too strict compared with the commonly used assumption. 2. The empirical results are monotone for the weighted sum transition function, and the authors should add some other empirical results for more known transition function cases. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. The scalability in this scenario is poor, as shown in Table 2, and it seems worse than the error bound given by the theory. Will this happen for the known transition function case? What may be the possible reasons? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The authors mention that scalability may be an important issue. No further negative societal impacts are raised. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > Q: The authors should move some discussion of the necessity to let the transition function be deterministic to the introduction to prevent confusion since it seems too strict compared with the commonly used assumption. We mainly focus on deterministic transitions, since our work was motivated by neuro-symbolic learning and NLP. Notice that while looking more restrictive, learning under deterministic transitions is actually more challenging, as prior learnability assumptions [4,9,26] no longer apply. We believe it is not difficult to extend our results to randomized transitions. In fact, in Appendix E.3, we present such an extension to randomized transitions when $M=1$. We will follow the suggestion to add a relevant discussion in the introduction. > Q: The empirical results are monotone for the weighted sum transition function, and the authors should add some other empirical results for more known transition function cases. Our aim was to offer a rigorous theoretical analysis of multi-instance PLL. Hence, we would like to kindly point the reviewer to prior art that presents results for learning under known transitions, e.g., NLog [41], NASP [53] and Scallop [22], that offer a more in-depth empirical analysis. For instance, [41] presents results on learning the pieces of a chessboard via supervision on the status of the kings; [53] presents results on learning an MNIST classifier using valid SUDOKU boards; and [22] presents results on visual question answering using multi-instance partial labels. Nevertheless, to contrast learning under known vs learning under unknown transitions, we have added new results for a variant of the WEIGHTED-SUM scenario that assumes known weights. Please see the **Updates on the empirical analysis** section at the top of our general rebuttal. > Q: The scalability in this scenario is poor, as shown in Table 2, and it seems worse than the error bound given by the theory. 
Will this happen for the known transition function case? What may be the possible reasons? For known transitions, there is still the issue of scalability as already pointed out by prior art. However, it will be milder, as it requires learning fewer parameters. Please check the section **Updates on the empirical analysis** at the top to see our new results on learning with known weights. The scalability issue is caused both by the problem itself (see our discussion in the top general rebuttal about WEIGHTED-SUM) and the limitations of the current neuro-symbolic methods. For instance, Scallop [22] adopts more sophisticated grounding techniques and approximations that allow it to scale to larger scenarios than DLog. Please also check the section **Comments on scalability and accuracy** in our general rebuttal above for a detailed explanation. > Q: No further negative societal impacts are raised. Thanks for pointing this out. We will add a discussion on broader impacts in the revised version. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed reply, especially for the updated empirical results and scalability. I'm not very familiar with this topic, but I think the theory results for multi-instance PLL are solid and will be helpful for relevant research. I will keep my score and wait to discuss with other reviewers.
Rebuttal 1: Rebuttal: We would like to express our gratitude to all the reviewers for their valuable feedback. Below, we address some commonly raised issues. **Comments on the importance of multi-instance PLL raised by reviewers vhBR & KpA9**. Multi-instance PLL captures neuro-symbolic learning and latent structural learning in NLP [AdditionalRef1]. The latter settings have, in turn, several advantages over end2end neural architectures. One obvious advantage is the ability to extract and reuse the latent model. Another advantage is improved end-task accuracy. For instance, learning via multi-instance PLL can lead to architectures with higher accuracy than that of end2end architectures in NLP [AdditionalRef1] and visual QA tasks [22]. It is worth noting that several recently proposed neuro-symbolic works are based on multi-instance PLL (e.g., [11,22,25]), so we do believe that this learning setting will find more applications in the future. **Comments on our contributions raised by reviewer 6VmD**. We would like to point out a misunderstanding regarding the applicability of our results. Reviewer 6VmD claims that we prove learnability with distributions that concentrate mass on a single instance/label. However, this is not true, as our learnability results *do* apply to any distribution, not just the concentrated ones. The only reason we mentioned concentrated distributions is to prove that the proposed conditions are necessary (instead of just being sufficient). We will improve our presentation in the revision to emphasize the generality of our results. We also provide a more detailed discussion in the individual response below. **Comments on scalability and accuracy made by reviewers a7AK & KpA9**. First, while WEIGHTED-SUM might seem an easy scenario at first sight, it is quite challenging, as the hidden space grows exponentially in the domain of the weights, i.e., $M^{10} \times M^5$ (recall that the weights are in $\\{1, …, 5\\}$). 
Notice also that when restricting to binary weights, WEIGHTED-SUM reduces to learning Boolean formulas with conjunction and disjunction [35], as discussed in Section 6. It is worth stressing that both scalability and accuracy significantly improved when the transition is known (see the **Updates on empirical analysis** section below), i.e., basically the original learning setting supported by the neuro-symbolic techniques considered in Section 6. Second, scalability is a known problem in neuro-symbolic learning. This is partially due to the fact that the relevant problems in logical reasoning are intractable. For instance, semantic loss [50] (adopted by DLog [29], DLog-A [30], NLog [41] and Scallop [22]) requires computing the models of Boolean formulas, which is #P-complete. Reasoning over logical theories can also be intractable [AdditionalRef2] (notice that NASP fails to support the WEIGHTED-SUM scenario for $M \ge 3$ due to its inability to ground the relevant theory), if not undecidable, while various satisfiability problems are NP-hard to solve in an exact fashion. Scalability also straightforwardly affects accuracy, e.g., NLog obtains better accuracy over DLog-A for $M=3$ (see Table 2) due to its ability to explore the whole search space. The above challenges bring up new research directions. It is worth noting that there have been promising attempts recently to tackle those challenges. For instance, the following works can be incorporated into neuro-symbolic learning techniques to improve scalability: - A scalable grounder for probabilistic Datalog: E. Tsamoura et al., “Probabilistic Reasoning at Scale: Trigger Graphs to the Rescue”, SIGMOD 2023. - A scalable approximate model counting technique: Mate Soos et al., “Engineering an Efficient Approximate DNF-Counter”, IJCAI 2023. **Updates on the empirical analysis**. To further address some of the reviewers’ concerns, we repeated the WEIGHTED-SUM scenario (Section 6) using Scallop [22]. 
Scallop supports the top-k semantic loss (Section 3.2). In the attached PDF, the LHS of Figure 1 shows the accuracy of the MNIST digit classifier every 1000 iterations for $k = \\{1, 2, 3, 4\\}$, when $M = 4$, $m_P = 10000$. The digit classifier has been pre-trained using 16 labeled MNIST digits, having classification accuracy equal to 58%. In contrast to DLog-A and NLog (see Table 2 in our submission), the accuracy of the digit classifier improves to 84% when $k > 1$, showing that multi-instance partial labels can improve the classification accuracy even with unknown transitions. The above result empirically shows the benefits of the top-k semantic loss approximation (Section 3.2) over approximations proposed by other neuro-symbolic techniques (see Table 2 in our submission). To address the comment of reviewer a7AK on learning with known transitions, we repeated the experiment described in the previous paragraph except that the weights are now known and randomly chosen from $\\{1, …,100\\}$ (RHS of Figure 1 in the PDF). In contrast to the unknown case, the classification accuracy exceeds 98%. To further assess scalability, we repeated the latter experiment for $M = \\{6, 10\\}, k = \\{1, 3\\}$ and $m_P = 10000$, see Figure 2. Again, the accuracy of the digit classifier exceeds 98%, a result that is quite surprising, given that the supervision signal is rather weak when $M=10$. The above shows that PLL with known transitions is, expectedly, simpler. Finally, we want to mention that the NASP [53] authors informed us about a bug in their implementation, due to which the weight classifier was not trained. We repeated WEIGHTED-SUM for $M = 2$, obtaining accuracy up to 98%. However, even after the bug fix, NeurASP could not scale for $M \ge 3$. We proceed with other individual comments below. --- - [AdditionalRef1] Peng et al., “Backpropagating through Structured Argmax using a SPIGOT”, ACL 2018. - [AdditionalRef2] J. 
Baget et al., “Walking the complexity lines for generalized guarded existential rules”, IJCAI 2011. Pdf: /pdf/db9ccb1fb47e084debaacb2724b42249494a8a7e.pdf
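Since the updated experiments above rely on the top-k semantic loss, the following is a rough illustrative sketch of the kind of approximation involved. It assumes, as in Scallop-style systems, that only the k most probable models (satisfying label assignments) of the constraint are kept, combined with the noisy-or form of the semantic loss quoted in the individual responses; the function is our own simplification, not the paper's exact definition:

```python
import math

def topk_semantic_loss(model_probs, k):
    """Keep only the k most probable models of the constraint and combine
    them with a noisy-or before taking the negative log."""
    kept = sorted(model_probs, reverse=True)[:k]
    p_none = 1.0
    for q in kept:
        p_none *= (1.0 - q)  # probability that none of the kept models holds
    return -math.log(1.0 - p_none)

probs = [0.3, 0.2, 0.1]  # made-up probabilities of three satisfying assignments
# A larger k keeps more models, so the approximate loss can only decrease.
assert topk_semantic_loss(probs, 3) <= topk_semantic_loss(probs, 1)
```

This illustrates the approximation/estimation tradeoff the authors attribute to the choice of k: a small k is cheaper but scores the constraint with fewer of its models.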
Dataset source: NeurIPS_2023_submissions_huggingface
Conference year: 2023
AdANNS: A Framework for Adaptive Semantic Search
Accept (poster)
Summary: This paper proposes to use Matryoshka Representations for approximate nearest neighbor search (ANNS). Matryoshka Representations provide the flexibility to adjust the budget for index search, index storage, and distance computation by changing the dimension of the used embedding. The paper instantiates the idea using IVF and OPQ, two popular techniques for ANNS, and shows impressive experiment results. Strengths: I like that this paper focuses on the end-to-end performance of ANNS, i.e., the top-1 accuracy of ANNS applications. This differs from the widely used recall, which only considers distance in the embedding space, and could be inspiring for the field. 1. The property of Matryoshka Representations is interesting and using it to make ANNS flexible makes sense. However, the authors could introduce more about Matryoshka Representations, e.g., how they are trained and what the training cost is compared with standard embeddings. 2. Two examples of using the Matryoshka Representations are provided. 3. The experiment results are comprehensive and impressive. 4. The gains of Matryoshka Representations and limitations are discussed in Section 5. Weaknesses: The labels of the figures need to be enhanced to make them self-contained. Technical Quality: 3 good Clarity: 3 good Questions for Authors: NA Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their time and feedback and are glad they found our work impressive and well suited for ANNS. We address the reviewer's feedback below: 1) **Details on Matryoshka Representations**: We have briefly discussed the details of Matryoshka Representations in L154 - 163 and shall add more details to make it much more accessible to readers. In our experience, Matryoshka Representation Learning does not add any training overhead or hyperparameter tuning on top of a chosen representation learning setup (across models, datasets and modalities). 2) **Labels of the figures**: We thank the reviewer for their feedback on figure labels. We will address these in the next revision. We hope that the rebuttal clarifies the questions raised by the reviewer. We would be very happy to discuss any further questions about the work, and would really appreciate an appropriate increase in score if the reviewer's concerns are adequately addressed to facilitate acceptance of the paper. --- Rebuttal Comment 1.1: Title: Further questions or concerns? Comment: We are happy to discuss if anything in the rebuttal needs more clarification or if the reviewer has further questions regarding the paper.
Summary: Matryoshka Representations have the advantage that the first m dimensions of the d-dim vector can, as-is, serve as a good m-dim representation of the original d-dim vector. This paper demonstrates how Matryoshka Representations (MR) can be used together with approximate nearest neighbor search indices to speed up test-time inference as well as optimize for indexing cost and storage budgets for semantic search applications. The authors run extensive experiments on two datasets with up to 21 million items, carefully exploring the design space. Update: I have updated my score to 7 after reading clarifications provided by authors. Strengths: - The paper is very well-written and easy to follow (although it requires quite a few jumps to and from the appendix to really understand the results :) ). - The paper presents a clever use of properties of Matryoshka Representations for semantic search. Unlike standard dimensionality reduction methods like SVD or random projection, use of MR representations allows use of lower-dim representations when needed without additional computation to get low-dim representations. - Extensive experiments and analysis of the proposed approach. Weaknesses: - Too many hyper-parameters to optimize, which might limit wide-scale adoption of the proposed methods. - A major chunk of the observed performance boost comes from merely using Matryoshka Representations (MR) instead of traditional dense representations (RR). For instance, in Fig. 2, IVF built with MR gives significant improvement over IVF built with RR. While the proposed adaptive IVF outperforms fixed-IVF with MR, it may require a non-trivial exhaustive search of hyper-parameters. And, as shown in Fig. 10, only a small fraction of these hyper-parameter settings for adaptive-IVF-MR outperform fixed-IVF with MR representations. - It would be helpful to also include a practical guide on how to choose these hyper-parameters in practice. 
In my opinion, this can significantly help wide-scale adoption of ideas in this work. - (Minor) The proposed adaptive search methods only work with Matryoshka Representations and cannot be applied directly to any dense vector representations. This might limit the impact of the paper, as the user may not have the budget or infrastructure to train MR representations, especially when such embeddings are obtained from some large pre-trained vision/language models. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - Is ground-truth 1-NN (as per the dense representations) used or the ground-truth label (as per labelled data for the downstream task) used for eval? - Can the authors elaborate on why using 2k clusters with d/2-dim vectors is just as costly as using k clusters with d-dim vectors? What cost is being referred to in this statement? Is it test-time inference cost or is it indexing cost? - In Fig. 2, why does AdANNS-IVF perform better than AdANNS-IVF-D? Why does using a smaller-dim MR embedding give better results than using the full 2048-dim embedding for clustering? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their time and feedback. We are glad the reviewer found our work easy to follow and clever. Below, we answer the questions raised in the review: 1) **Jumps to Appendix**: Thanks for pointing out the need to jump to the appendix; we shall improve readability to minimize this in the next revision. 2) **Too many hyperparameters**: As mentioned in the general comment, forming an optimal index can have a massive impact on cost, QPS, and latency at industry scale. It is therefore standard industry practice to carefully search over all ANNS hyperparameters, and AdANNS does not fall outside that practice. However, we acknowledge that choosing the optimal hyperparameters for AdANNS is an interesting and open problem that requires more rigorous examination. - *Existing solution*: Autofaiss [1] optimizes hyper-parameters of ANNS indices for the constraints at hand (i.e. the optimal compute-accuracy model for a given use case). We will publish a similar guide on top of AdANNS in the next revision. - *Hyperparameter selection guide*: We agree with the reviewer that only a small fraction of the hyperparameter space of AdANNS is above IVF-MR – Fig. 10 was intended to show the entire design search space, which is not feasible in real-world deployment. Hence, we suggest starting with IVF-MR configs and moving towards AdANNS based on the use case. We would like to mention that the gains of AdANNS-IVF come from both simple adaptivity (e.g. MG-IVF-RR is 1-1.5% more accurate than IVF-RR) and better clustering of MRs (IVF-MR is 1-1.5% more accurate than IVF-RR), resulting in an overall boost of up to 2% ground truth top-1 accuracy; even a bump of 0.5% is potentially significant in searching web-scale long-tail data. We thank the reviewer for bringing up this interesting point, and shall add a brief discussion on selecting hyperparameters for AdANNS in the next revision. 
3) **Requirement for Matryoshka Representations**: We acknowledge that a current disadvantage of full-potential AdANNS is the requirement to backfill MRs (Section 5.3, L405-410). We address this issue in two ways: - *Adaptivity*: We demonstrate this with the MG-IVF-RR baseline in Section 4.1 (Figure 2), where we “replicate” AdANNS with existing dense vector representations (RRs) for gains of more than 1% ground truth accuracy. This is a strong method, but it is several times more expensive during both training and inference than AdANNS using MRs. E.g. MG-IVF-RR requires training 10 independent rigid models, each of which requires independent inference to obtain dense vectors for ANNS. Further, we can also leverage adaptivity through post-hoc compression (e.g. MG-IVF-SVD), albeit not as well as with explicitly trained RRs. - *Inducing MR behavior through fine-tuning*: We agree with the reviewer that pretraining large models with MRL is expensive, such as in the case of large vision/language models. We have shown that Matryoshka behavior can be induced in pretrained models by fine-tuning the BERT-Base encoder on the Natural Questions train set, which we use for all Natural Questions passage retrieval in this work. A more detailed study on fine-tuning to induce MR behavior is discussed in Appendix K.1 in [2]. 4) **Evaluation metric**: For all the main experiments, we use the ground-truth label to compute top-1 accuracy (available for the ImageNet and NQ datasets). However, we also used ground-truth NN (w/ exact search) in k-recall@N to be consistent with the literature (defined in Appendix C, L570-572) in experiments (Appendix E.2, Figure 8b and Appendix I.1, Figure 12). Overall, ground-truth label based top-1 is a harder and more representative metric than the widely used exact-search NN based metrics. We will make this point clearer in the next revision. 
5) **Clustering and cluster selection costs**: The complexity of both training a k-means clustering with d-dim vectors and the cluster selection for a given query scales as *O(kd)*. Both the training and cluster selection costs for 2k clusters with d/2-dim vectors therefore remain the same as for k clusters with d-dim vectors. We shall make this clearer in the next revision of the manuscript. 6) **AdANNS-IVF-D**: We discuss this in more detail in Section 5.1 (L363-373), but to summarize, AdANNS-D is an “emergent” behavior that has not been explicitly designed for, which leverages the properties of MRs to enable elastic search during inference. AdANNS-IVF provides more flexibility by making the entire design search space available for compute-aware deployment, and thus allows finding configurations that are more accurate than AdANNS-IVF-D. 7) **Low-dim clustering**: Due to the efficient information packing of MRs, for a majority of the data, d/2 dimensions carry sufficient information compared to the full d dimensions. This can be seen from the high top-1 classification accuracy of low-dim MRs on ImageNet. So when a pair of high-d and low-d representations contains similar information, it is easier to converge to a better clustering using the low-d vectors, since high-d clustering suffers from the curse of dimensionality. As k-means scales as *O(kd)*, with k held constant, low-d representations with similar accuracy converge faster and to a better solution in practice. We are happy to further discuss this aspect. We hope that the rebuttal clarifies the questions raised by the reviewer. We would be very happy to discuss any further questions about the work, and would really appreciate an appropriate increase in score if the reviewer's concerns are adequately addressed to facilitate acceptance of the paper. 
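The cost equivalence in point 5 above can be sanity-checked with a back-of-the-envelope sketch (our own illustration, not the paper's implementation):

```python
def cluster_selection_flops(k: int, d: int) -> int:
    # IVF cluster selection compares a query against k centroids of
    # dimension d; each distance costs roughly 2*d FLOPs (subtract,
    # square, accumulate), so the total scales as O(k*d).
    return k * 2 * d

# Doubling the number of clusters while halving the dimensionality
# leaves the cluster selection (and k-means training) cost unchanged.
print(cluster_selection_flops(k=1024, d=2048))
print(cluster_selection_flops(k=2048, d=1024))  # same value as above
```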
[1] Paltz et al., 2023 criteo/autofaiss, Github repository [2] Kusupati et al., NeurIPS 2022 Matryoshka Representation Learning --- Rebuttal Comment 1.1: Title: Acknowledging author response Comment: Thank you for answering my questions! I have read the author response and have increased my score from `6: Weak Accept` to `7: Accept` and I would vote in favor of accepting the paper. I would encourage the authors to a) update the presentation so as to minimize the need to oscillate between the appendix and the main paper and b) add more details on hyper-parameter tuning, maybe even a set of default hyper-parameters that the authors would suggest using for new applications/datasets. --- Reply to Comment 1.1.1: Title: Thanks Comment: We thank the reviewer for their response and support for the acceptance of the paper. We shall address both the points mentioned above in the camera-ready version of the manuscript.
Summary: The authors introduced AdANNS, a framework that effectively harnesses the flexibility of Matryoshka Representations. This approach is applied to two fundamental components of typical ANNS systems: (a) the search data structure that stores datapoints, and (b) the distance computation that maps a given query to points within the data structure. To incorporate matryoshka representations with each of these ANNS building blocks, the authors proposed AdANNS-IVF and AdANNS-OPQ. Extensive experiments demonstrated that AdANNS achieves a state-of-the-art accuracy-compute trade-off for the two primary ANNS building blocks. Moreover, when combining AdANNS-based building blocks, superior real-world composite ANNS indices can be constructed. Despite these findings, it should be noted that the authors conducted their research on small-scale open-source datasets only. Nevertheless, they assert that the primary contribution of this research is the proposal of AdANNS, which they claim is applicable to web-scale industrial systems. Strengths: (1) Although Matryoshka Representations (MR) already contain multi-granular representations, meticulous integration with ANNS building blocks is paramount to develop a functional method, representing the key contribution of this work. (2) Through extensive experiments, it has been demonstrated that AdANNS achieves a state-of-the-art accuracy-compute trade-off for the two primary ANNS building blocks. Furthermore, the combination of AdANNS-based building blocks results in the creation of superior real-world composite ANNS indices. Weaknesses: (1) It may be beneficial for the authors to restructure the introduction. For instance, in the second paragraph, the authors delve into the details of IVF, explaining its two phases and why using the same high-dimensional Rigid Representation (RR) for cluster mapping and linear scanning could be sub-optimal. 
While I understand the authors' intent to introduce adaptive representations followed by matryoshka representations, I believe it would be more logical to first describe each component of ANNS and explain how adaptive representations could serve as a replacement for rigid representations. (2) It's important to acknowledge that the authors conducted their research exclusively on small-scale open-source datasets. Despite this limitation, they maintain that the main contribution of their study is the introduction of AdANNS, which they assert can be applied to web-scale industrial systems. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Why didn't you include a larger dataset for the experiments? Was it due to data availability or limitations in your implementation? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have discussed the limitations of this work in section 5.3: To use AdANNS on a corpus, one need to back-fill the MRs of the data – a significant yet a one-time overhead. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their time and feedback. We are glad the reviewer acknowledged our work’s potential to provide superior real-world ANNS indices. Below, we answer the questions raised in the review: 1) **ANNS components in the Introduction**: Thanks for the valuable suggestion on restructuring the introduction. We agree that fleshing out ANNS and its fundamental components would strengthen the introduction, and we shall add this in the next revision. 2) **Evaluation of AdANNS**: We respectfully disagree with the reviewer that the evaluation was done on small-scale datasets. While not billion-scale as in industry, ImageNet (image, 1.3M database) and Natural Questions (text, 21M database) are strong benchmark datasets for retrieval evaluation with publicly available raw data to train new representations from scratch (which is needed for strong adaptive representations and new variants like MRs). Most ANNS progress has been on pre-built representations with around a million points in the database, such as in ANN-Benchmarks [1]. The more recently released larger benchmarks (Big ANN Benchmarks [2]) have about 10M database points. These standard benchmarks rarely have publicly available raw data, unlike ImageNet and NQ, which are of similar scale. Further, none of the ANNS benchmarks have ground truth labels for evaluation, instead considering exact-search NN as ground truth. Because our implementations easily scale to NQ, they should work for most of these ANNS benchmarks; as shown previously [3], the nontrivial gains achieved on ImageNet-based datasets often generalize to other datasets and to web-scale. In summary, it is primarily due to the lack of publicly available raw data for the other datasets and not due to limitations in our implementation. We discuss this in more detail in Related Work (L130-140) and are happy to discuss this further. We hope that the rebuttal clarifies the questions raised by the reviewer. 
We would be very happy to discuss any further questions about the work, and would really appreciate an appropriate increase in score if the reviewer's concerns are adequately addressed to facilitate acceptance of the paper. [1] Aumüller et al., Information Systems 2019 ANN-Benchmarks: A Benchmarking Tool for Approximate Nearest Neighbor Algorithms [2] Simhadri et al., Big ANN Benchmarks, Github repository [3] Kornblith et al., CVPR 2019 Do Better ImageNet Models Transfer Better? --- Rebuttal Comment 1.1: Title: Further questions or concerns? Comment: We are happy to discuss if anything in the rebuttal needs more clarification or if the reviewer has further questions regarding the paper.
Summary: The authors introduced an adaptive method for searching nearest neighbors called AdANNS, which employs different representations of the same item at various stages of the engine. Rather than relying on traditional fixed vector representations, the authors utilized Matryoshka representations, creating a nested representation with varying dimensions for each item. Strengths: 1) AdANNS beats the traditional IVF and OPQ on the accuracy and compute-resource tradeoff by proposing two main ANNS building blocks: search data structures (AdANNS-IVF) and quantization (AdANNS-OPQ). 2) The comparisons are made against the standard NN approaches IVF and OPQ. Weaknesses: 1) Limited datasets: The authors have only used the ImageNet-1K and NQ datasets for vector-based retrieval. Many standard open-source datasets are available for this purpose (ANN benchmark and BigANN benchmark). A simple MLP/autoencoder can be trained for varying embedding sizes, or dimensionality reduction methods (like PCA, random projection) can be used. 2) The paper is weak in novelty. It uses the existing Matryoshka Representations of each vector for nearest neighbor retrieval. I do acknowledge that the modern composite ANNS index framework is being modified to support the Matryoshka Representations. This provides a gain in top-1 accuracy for a given compute budget. However, I see this as a trivial change. 3) There is no comprehensive theoretical analysis that compares AdANNS-IVF, AdANNS-OPQ, and AdANNS-DiskANN against their baseline counterparts. 4) Baselines: The baseline comparisons are limited. As AdANNS can be adapted to HNSW and ScaNN as well, it would be good to analyze the performance improvement there. 5) Marginal gains: (Table 3 Appendix) It seems like AdANNS-IVF provides marginal accuracy gains over IVF with similar query times. This is good but not very convincing in favor of Matryoshka representations against single rigid representations. 
Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1) How are the vector sizes $d_c$ and $d_s$ decided for a particular dataset? A grid search is not practical. Similarly, how is the parameter $k$ set? Is there any rule? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors have discussed the limitations in section 5.3. There are no potential negative societal impacts. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their time and feedback. Below, we answer the questions raised in the review: 1) **Limited datasets**: We discuss our reasoning for experimenting on only the ImageNet and Natural Questions datasets in more detail in Related Work (L130-140). To summarize: - *Existing ANNS benchmarks*: The BigANN benchmarks [1] and ANN benchmark datasets [2] **do not provide raw data**, and we thus cannot encode MRs for AdANNS. Also, they are generally datasets with <10M points. Results on ImageNet and NQ demonstrate the potential of AdANNS across modalities (image, text) and scale (1M to 21M). Finally, the dense retrieval community uses NQ as a benchmark, and non-trivial gains on ImageNet benchmarks often generalize (Kornblith et al. [4]), even to web-scale. - *Training MLP/Autoencoder*: As both the BigANN [1] and ANNS benchmarks [2] do not have ground truth labels, we would need to train an MLP/autoencoder with a reconstruction loss on the embeddings, which is not ideal for learning the optimal information packing of MRs. We would also require raw data to finetune an RR encoder into an MR encoder, as we do for BERT-Base on the Natural Questions train set (used for all passage retrieval experiments on NQ in this work) and as shown in Appendix K.1 of the MRL paper (Kusupati et al.). - *Dimensionality reduction*: We would like to point out that PCA or random projections do not work well in practice, as shown in our experimentation (see MG-IVF-SVD in Section 4.1, Figure 2) and in Table 2 of the MRL paper (Kusupati et al.). 2) **Novelty**: We strongly disagree that the AdANNS framework is a trivial change over MRL. Despite the vast amount of literature on ANNS going back decades, existing methods all use a fixed representation size for all building blocks. We propose a shift in this paradigm, where different blocks can use different parts of MRs, thus providing one more point of control in the ANNS pipeline. 
Given the significance of ANNS in search and retrieval-augmented LLMs, we believe the improved compute-accuracy trade-offs of AdANNS can have significant downstream impact. We discuss this in more detail in the Introduction (L077-085) and are happy to discuss further. 3) **Theoretical analysis**: Thanks for the comments on potential theoretical analysis. Generally, the ANNS field itself lacks rigorous analysis due to challenges in modeling the data and queries. In particular, even famous techniques like IVF and HNSW do not have solid theoretical analyses; such analysis is mostly limited to methods like KD-trees, which do not scale. Having said that, we agree that further investigation on the theoretical front of ANNS is needed, and we leave it for future work. 4) **Baseline evaluation**: We respectfully disagree that our baseline evaluations are limited. We in fact already evaluated HNSW+AdANNS: we show results for AdANNS-HNSW with OPQ in Appendix D (Figure 6d), where we find that AdANNS provides gains over using MRs and RRs in high compression regimes (<= 32 bytes). We further evaluate AdANNS-HNSW with a recall score analysis in Appendix I (Figure 12). Similarly, ScaNN is essentially search-space partitioning with anisotropic vector quantization, analogous to the IVF with OPQ that we have experimented with. Lastly, we have discussed the future extension of AdANNS to HNSW and ScaNN in Section 4 (L278-284) and are happy to discuss in more detail. We would like to highlight that both IVF and DiskANN are state-of-the-art techniques used in billion-scale data regimes, and OPQ is a ubiquitous quantization scheme for ANNS. 5) **Marginal accuracy gains**: We strongly disagree with the reviewer that the accuracy gains of AdANNS over rigid IVF are marginal. Note that even gains of 0.5% are potentially significant for web-scale long-tail data and also translate to other real-world tasks [4]. 
In contrast, we are able to gain up to 1.5% ground truth accuracy over rigid representations for the same query time (Appendix B, Table 3). 6) **Hyperparameter search ($d_c$, $d_s$)**: We acknowledge that choosing the optimal hyperparameters for AdANNS is an interesting and open problem that requires more rigorous examination, though we would like to point out that tuning for constraints is not a new problem in the ANNS community [3]. We find that, generally speaking, the highest dimensionality that fits within a compute budget is optimal for IVF (Figure 14, Appendix I.3; Figure 7c, Appendix E.1). For OPQ, we find that across ANNS strategies the best performance comes from learning quantization on a smaller dimensionality (Section 4.2, L310-312). Lastly, we also monitor the accuracies of low-d representations on the downstream dataset to ensure only a minimal loss in information from the high-d representations. We shall add these takeaways as a guide for setting the right $d_c$ and $d_s$ for a given dataset. 7) **Number of clusters ($k$)**: It is generally accepted that a good starting point for the number of clusters $k$ is $\sqrt{N_D/2}$, where $N_D$ is the number of indexable items [5]. Further, $k=\sqrt{N_D}$ is the optimal choice of $k$ from a FLOPs perspective, as can be seen in Appendix E.4. We thank the reviewer for bringing up this point, and will discuss it in the next revision. We hope that the rebuttal clarifies the questions raised by the reviewer. We would be very happy to discuss any further questions about the work, and would really appreciate an appropriate increase in the score if the reviewer’s concerns are adequately addressed to facilitate acceptance of the paper. 
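The rules of thumb in point 7 above can be written down directly (a sketch with illustrative database sizes roughly comparable to the paper's; not the authors' code):

```python
import math

def starting_num_clusters(n_items: int) -> int:
    # Commonly cited starting point: k = sqrt(N_D / 2).
    return round(math.sqrt(n_items / 2))

def flops_optimal_num_clusters(n_items: int) -> int:
    # k = sqrt(N_D) balances comparing against k centroids (cost ~k*d)
    # with scanning the ~N_D/k candidates in a probed cluster (~(N_D/k)*d).
    return round(math.sqrt(n_items))

for n in (1_300_000, 21_000_000):  # roughly ImageNet- and NQ-sized databases
    print(n, starting_num_clusters(n), flops_optimal_num_clusters(n))
```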
[1] Simhadri et al., Big ANN Benchmarks, Github repository [2] Aumüller et al., Information Systems 2019 ANN-Benchmarks: A Benchmarking Tool for Approximate Nearest Neighbor Algorithms [3] Paltz et al., 2023 criteo/autofaiss, Github repository [4] Kornblith et al., CVPR 2019 Do Better ImageNet Models Transfer Better? [5] Mardia et al., 1979 Multivariate Analysis p.365 --- Rebuttal Comment 1.1: Title: Further questions or concerns? Comment: We are happy to discuss if anything in the rebuttal needs more clarification or if the reviewer has further questions regarding the paper.
Rebuttal 1: Rebuttal: We thank the reviewers for their time and valuable feedback. We are happy to know that the reviewers found the paper to be very well written, easy to follow, and clever, along with extensive experimentation and analysis showcasing a state-of-the-art accuracy-compute tradeoff for ANNS building blocks and composite indices. However, there were some general comments we want to address here briefly before diving deep into each individual rebuttal. 1) **Further improving readability**: We shall fix the remaining few issues with the flow of the paper and try to reduce the readers' dependency on referring to external papers and the appendix when focusing on the fundamental and primary contributions. 2) **Hyperparameter search guide**: Most reviewers found a need for a hyperparameter search guide for AdANNS, and we completely agree with them. While the hyperparameter search for AdANNS might seem expensive, it is not a new problem in the ANNS community, which has potential solutions like Autofaiss [1]. Furthermore, because the index is formed once and used for potentially billions of queries, with massive implications for cost, latency, and queries-per-second, a hyperparameter search for the best index is generally accepted industry practice. In the case of AdANNS, starting at the best configurations of MRs followed by a local design space search would lead to near-optimal AdANNS configurations (e.g. use IVF-MR to bootstrap AdANNS-IVF). We shall add a section on this in the next revision of the paper. 3) **Limited datasets**: Another common question is regarding the evaluation of AdANNS on only the ImageNet (1M and 4M) and Natural Questions (21M) datasets. We discuss this in detail as part of our Related Work (L130-140), but a short answer is that all the standard ANNS benchmarks [2,3] *do not provide raw data* to allow for the training of new representations. This makes evaluation nearly impossible with new representation learning paradigms. 
At the same time, using post-hoc compression techniques to obtain adaptive representations results in extremely poor performance, as shown in this paper and in MRL [4]. Lastly, even the benchmark datasets are often at the scale of 1-10M and are similar to what we have evaluated – note that ImageNet and NQ have ground truth labels, allowing us to compute the harder and more representative ground truth top-1 accuracy, unlike most benchmarks. We are happy to discuss in more detail during the discussion phase. 4) **Novelty and baselines**: While most reviewers agree that enabling adaptive semantic search in ANNS is a hard problem, we want to re-emphasize that without meticulous integration of adaptive representations into ANNS building blocks and composite indices, the utility of adaptive representations is severely limited. While MRL [4] proposed a simple variant of adaptive retrieval, in practice it does not scale to any large-scale problems due to various systems issues often already addressed in modern-day ANNS pipelines. We believe that AdANNS is a non-trivial contribution that shows the utility of adaptive representations across core building blocks (which form the baselines) along with a scalable implementation. We hope that the individual rebuttals clarify the questions raised by the reviewers. We are happy to discuss any other concerns further during the discussion phase and appreciate your support for the acceptance of the paper. [1] Paltz et al., 2023 criteo/autofaiss, Github repository [2] Aumüller et al., Information Systems 2019 ANN-Benchmarks: A Benchmarking Tool for Approximate Nearest Neighbor Algorithms [3] Simhadri et al., Big ANN Benchmarks, Github repository [4] Kusupati et al., NeurIPS 2022 Matryoshka Representation Learning
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Stability Guarantees for Feature Attributions with Multiplicative Smoothing
Accept (poster)
Summary: The paper is about the stability of explanations in the feature attribution setting for image classification tasks. The authors illustrate that in certain settings, swapping/removing a pixel of an explanation completely changes the classifier's prediction, which is undesired. They define formally what a "stable" explanation is and introduce Multiplicative Smoothing as a method to obtain these stable explanations. In principle, it multiplies the image with feature attribution masks such that the classifier becomes smooth, in the sense that it doesn't suddenly (non-smoothly) change its prediction from, for example, cat to goldfish. Experiments demonstrate the efficacy of MuS. Strengths: 1. It's clear what the paper is trying to achieve 2. The proposed solution is simple and thoroughly explained with good theoretical justification. Weaknesses: 1. It's a bit unclear what exactly smoothness refers to; currently it seems to refer to no discontinuous jumps in classification output when removing a feature attribution pixel? 2. The experiments are a bit unintuitive and need clarification of what's being tested here and why. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. What is smoothness exactly? I never got a good intuition of what this exactly means. 2. I don't understand what the experiments do; please explain how and why these experiments demonstrate that MuS is working as intended. 3. Is it correct to understand that the paper is in some sense looking for the "minimal" set of features that explains a class and won't change with some perturbation? ----- Post rebuttal: The authors did a good job explaining what's going on; I left some feedback for exposition and raised my score to 6. It's an interesting paper. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the Reviewer for their time and comments. We will accordingly update our exposition to better clarify our methods, experiments, and contributions. Below we group some of the Reviewer's comments and questions in our response. ### Weakness 1 / Question 1: Meaning of smoothness. Each coordinate of our smoothed classifier is $\lambda$-Lipschitz with respect to the binary attribution $\alpha$ in the $L^1$ norm. Recall that on binary vectors the $L^1$ metric measures bit flips. We will add an explicit definition of this in Section 2.3, and clarify this in the discussion of Theorem 3.2. Lipschitz smoothness means that whenever we mask/unmask a few features of the input $x$, the change in classifier confidence is provably bounded. In other words: if we fix $x$ and consider two binary attributions $\alpha$ and $\alpha'$, the prediction confidences of $f(x \odot \alpha)$ and $f(x \odot \alpha')$ will be similar provided that $\alpha$ and $\alpha'$ do not differ on too many bits. If the predicted class of $f(x \odot \alpha)$ is substantially more confident than the second-best class, then using any other sufficiently similar binary attribution $\alpha'$ will induce the same class. That is, $f(x \odot \alpha)$ and $f(x \odot \alpha')$ will provably yield the same class if: (1) there is a sufficient confidence gap between the first-best and second-best class, and (2) if $\alpha$ and $\alpha'$ do not differ by too many bit flips. This condition is computationally fast to check if $f$ is Lipschitz smooth, and we use MuS to guarantee Lipschitz smoothness in a sample-efficient and deterministic manner. ### Weakness 2 / Question 2: Clarity of experiments. We thank the Reviewer for raising this concern. We will amend our experiments section to better clarify each experiment. **Experiment 1 (Section 4.1):** This experiment shows that off-the-shelf classifier-explainer pairs can indeed achieve non-trivial stability guarantees. 
This is a significant result since there did not previously exist any stability-like guarantees for explanations. Moreover, our chosen classifiers and explanation methods were not customized for stability, so it is remarkable that MuS can still prove non-trivial stability in a large number of cases without much custom engineering. As our proposed method is classifier- and explainer-agnostic, we do not investigate new model architectures nor explanation methods in this paper, and instead opt to use existing ones to test MuS. **Experiment 2 (Section 4.2):** This experiment evaluates the accuracy degradation of the smoothed classifier as a function of $\lambda$. Because smoothing methods inject noise when evaluating the original ("base") classifier, it is expected that greater smoothness (i.e. smaller Lipschitz constants) is achieved at a cost to smoothed-classifier accuracy. Much of the smoothing literature, ours included, is concerned with navigating this smoothness-accuracy trade-off. Figure 5 (see also Section 2 of Rebuttal Supplementals) shows the certified accuracy curve of different classifiers under different amounts of noise, where the far-left value (in parentheses) is the overall accuracy of the smoothed classifier. In particular, the $\lambda=1/16$ curve is effectively a stress test in which the smoothed classifier is subject to extreme amounts of noise, higher than what is common in the smoothing literature. It is at these extreme noise levels that one begins to see non-trivial accuracy degradation in the smoothed model. **Experiment 3 (Section 4.3):** This experiment does not directly evaluate the efficacy of MuS. Rather, it evaluates which of our four chosen feature attribution methods (Vanilla Gradient Saliency, Integrated Gradients, LIME, SHAP) are best suited for achieving certified stability if naively applied.
In particular, we interpret higher attribution values to mean more important features, and we iteratively select the next-most-important feature into $\alpha$ until a stopping condition is achieved (consistency, radius-1 incremental, and decremental stability). The resulting $\alpha$ is then a proxy for the relative "efficiency" or "density" (the $k/n$ measure) of our chosen attribution methods. As stability-like guarantees do not exist for either continuous or binary attribution methods, we opted to simply use the most popular ones (which happen to be continuous) as a proof of concept test for efficiency. We remark that the construction of this $\alpha$ is technically a feature selection method, but we emphasize that this is only intended for crudely testing our four chosen methods. ### Question 3: Objective of paper. We believe there is a misunderstanding, and we will revise our exposition to better clarify our story. The focus of our paper is on analyzing and proving guarantees given an explanation. Our goal is not to find a minimal feature selection, nor to present a feature selection algorithm. We are agnostic to the selection method, which we assume is given. We do this because there do not yet exist feature attribution methods with useful explanation-relevant formal guarantees, so it would be premature to overly fixate on any particular one. We study binary feature attributions and ask: what properties are desirable, which conditions allow us to provably satisfy these properties, and how can we achieve said conditions? Our desired properties are stability (Definition 2.2), incremental stability (Definition 2.3), and decremental stability (Definition 2.4). Because our notion of stability is hard to show in general, we focus on the latter two: the condition that allows us to provably guarantee incremental/decremental stability is Lipschitz smoothness (Remark 2.6 and Theorem 3.3). We achieve Lipschitz smoothness using MuS (Theorem 3.2). 
Along with a sample-efficient noise construction in Section 3.3, we present a computationally efficient method for proving when a binary attribution is incrementally/decrementally stable. --- Rebuttal Comment 1.1: Title: Re: Rebuttal Comment: Thanks for the clarifications! It's a bit more clear now what you're trying to achieve, it would be appreciated if you could associate the definitions with perhaps more figures or illustrations providing the intuition for stability. Happy to buy in and raise to 6.
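The stability check described in this rebuttal (a confidence gap that survives a bounded number of bit flips) can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: `certified_same_class` is a hypothetical helper, and the $2\lambda$-per-flip bound is the standard argument for a classifier whose per-class confidences are each $\lambda$-Lipschitz in $\alpha$ under the $L^1$ norm.

```python
import numpy as np

def certified_same_class(confidences, lam, num_flips):
    """Return True if a classifier whose per-class confidences are each
    lam-Lipschitz in the binary attribution alpha (L1 norm, i.e. bit flips)
    provably keeps its predicted class under up to num_flips bit flips."""
    top2 = np.sort(confidences)[::-1][:2]
    gap = top2[0] - top2[1]
    # One flip moves each class confidence by at most lam, so the
    # top-1 vs. top-2 gap can shrink by at most 2 * lam per flip.
    return gap > 2.0 * lam * num_flips

conf = np.array([0.70, 0.20, 0.10])
assert certified_same_class(conf, lam=0.125, num_flips=1)      # gap 0.50 > 0.25
assert not certified_same_class(conf, lam=0.125, num_flips=2)  # gap 0.50 is not > 0.50
```

Once the smoothed confidences are available, the check itself is a single comparison, which is why the rebuttal describes it as computationally fast.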
Summary: This paper aims to make classifiers robust to feature removal and addition. The authors find that adding a patch to the mask obtained from an explanation method may cause the classifier to make a substantially different prediction. They introduce the notions of incremental stability and decremental stability to measure a classifier's stability w.r.t. mask changes. They apply MuS to any base classifier and show that MuS provides stability guarantees for this classifier. Strengths: 1. The notion of incremental stability and decremental stability is novel. 2. They motivate multiplicative masking instead of the widely-adopted additive masking by pointing out the inconsistency involved in additive masking. 3. A certified stability guarantee is obtained. 4. A sample-efficient algorithm is introduced to reduce the sample complexity of sampling from the Bernoulli distribution. Weaknesses: 1. Although masks can be sampled efficiently, obtaining the final prediction involves many inference steps. This might be time-consuming. The authors do not discuss how this could be tackled. 2. MuS seems to be similar to Randomized Smoothing with a Bernoulli distribution. By averaging predictions under several masks, the classifier uses information from the whole image to make predictions. It is not too surprising that the classifier is more robust to mask changes. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Are robust models also sensitive to mask removal or addition? 2. There are many works discussing robust attributions (e.g., [1], [2]) and establishing the connection between the robustness of explanations and the robustness of black-box models (e.g., [3], [4]). What's the relationship between these works and this paper?
[1] Dombrowski et al., Explanations can be manipulated and geometry is to blame, NeurIPS 2019 [2] Wang et al., Smoothed Geometry for Robust Attribution, NeurIPS 2020 [3] Tan et al., Robust Explanation for Free or At the Cost of Faithfulness, ICML 2023 [4] Agarwal et al., Rethinking Stability for Attribution-based Explanations, ICLR 2022 Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Listed is Weakness and Questions sections. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the Reviewer for the helpful comments, questions, and references. We will include additional exposition and discussion to better clarify our sample complexity, especially relative to other smoothing methods. Moreover, we will include discussions of the listed references, which we believe will help us better clarify our position in the context of the ML-explainability literature. Additionally, we have attached a Rebuttal Supplemental with material relevant to our response. ### Weakness 1: Sample efficiency of inference. The Reviewer is correct in identifying that requiring multiple model evaluations is a fundamental bottleneck for smoothing methods. We will emphasize in our revisions that our sample efficiency is significant relative to other smoothing methods: we require $<100$ samples to get deterministic guarantees, whereas other smoothing methods often require $>10000$ samples to get probabilistic guarantees. ### Weakness 2: Similarity with Randomized Smoothing with a Bernoulli distribution. The Reviewer makes a good observation, and in fact, a Bernoulli distribution was one of our intermediate steps when developing MuS. We again emphasize that our formulation of MuS (Theorem 3.2) critically does not require coordinate-wise independence in its distribution (see also the above discussion of Weakness 1). The advantage of this over a Bernoulli distribution is that we can then deterministically evaluate the smoothed classifier in $\ll 2^n$ samples via the construction in Section 3.3 (inspired by Reference [27]). We agree that for image classifiers especially, it is not surprising that a Bernoulli-like noising should preserve decent accuracy. Indeed, our experiments, namely Figure 5, affirm this intuition (see also the new version in Section 2 of Rebuttal Supplementals). Because using a Bernoulli distribution seemed so simple, we do not claim novelty on this idea alone, although we could not find other papers that explicitly discuss this.
Instead, we only claim novelty on the distribution of Theorem 3.2 in conjunction with the sample-efficient construction in Section 3.3. ### Question 1: Robust model sensitivity to mask addition and removal. The Reviewer raises an interesting question, and we have run additional experiments with robust models. In particular, we consider four versions of pre-trained ResNet50 from the "robustness" Python package: non-robust ("base"), $L^2 (\varepsilon = 3.0)$, $L^\infty (\varepsilon = 4)$, and $L^\infty (\varepsilon = 8)$. As we cannot provably certify the stability of non-MuS models, we instead empirically test how often they are able to hit radius $r=1$ of incremental stability with and without using MuS. We use SHAP-top25% for the explanation method and take $N=250$ samples from ImageNet. Our $r=1$ incremental stability rates for the four models are listed below and also included in Section 3 of the Rebuttal Supplementals:

| | base | $L^2\,(\varepsilon=3.0)$ | $L^\infty\,(\varepsilon=4)$ | $L^\infty\,(\varepsilon=8)$ |
|---|---|---|---|---|
| Without MuS | 0.208 | 0.176 | 0.180 | 0.232 |
| With MuS | 0.324 | 0.432 | 0.356 | 0.448 |

We observe that using MuS uniformly improves the empirical incremental stability rates. ### Question 2: Discussion with robust feature attributions. We thank the Reviewer for providing the references, and we will include discussions of these in our revised manuscript. An important difference is that we consider Lipschitzness with respect to input maskings (expressed as $L^1$ Lipschitzness in $\alpha$), whereas the listed references focus on adversarial robustness with respect to $L^p$ norms on continuous domains. This means that the domain and metric over which we define Lipschitz smoothness are fundamentally different from those of the listed references. An important consequence is that one unit of radius in our case easily achieves orders of magnitude more coverage of input features than in the standard $L^p$-robustness case.
For instance, in our examples and experiments we partition 3x224x224 images ($n=150528$ features) into 64 patches: one unit of radius in our setting then certifies 1/64 of the total 150k+ features. This is significantly more feature coverage compared to what is typical from $L^p$ smoothing-based methods, which often struggle to certify more than tens out of 150k+ features. We emphasize that this coverage improvement is due to our definition of Lipschitz-to-feature-maskings being fundamentally different from Lipschitz-to-$L^p$ norms on continuous spaces. Although $L^p$-robustness can in theory be used to certify properties of feature maskings, the certifiable radius is often too small to be useful. Crucially, our motivation to pursue this feature-masking variant of Lipschitz smoothness, and therefore MuS, is due to the insufficiency of common additive smoothing methods in our setting. This is because additive smoothing methods violate a condition that we call masking equivalence (Proposition 3.1), which makes them inadequate for use in the stability analysis of our context. --- Rebuttal Comment 1.1: Title: Response Comment: Thanks for the detailed reply! I will raise my confidence score to 4. I have a few follow-up questions and I hope you can give some further explanations. 1. To obtain a prediction, how many inference steps are required? If ~100 inference steps were required, would the smoothing step be very slow? 2. In the experiment results you provide, it seems that without MuS, the non-robust model has a higher hitting rate than the $L_2 (\epsilon=3), L_\infty (\epsilon=4)$ trained robust models. Do you have any explanation on this result? --- Reply to Comment 1.1.1: Comment: **Q1. Cost of inference steps.** Yes. If $q=100$ samples are used, evaluating the smoothed model would be $\times 100$ slower than the base model.
Our method for constructing the distribution allows us to choose the number of samples $q \geq 2$; we set $q = 64$ in most of our experiments and sometimes $q = 16$. In developmental testing, we have also found $q = 8$ to perform well (i.e. certified accuracy like Figure 5). Nevertheless, the Reviewer raises an interesting set of additional experiments to run for $q$ selection, and we will include them in our revised manuscript. Below is a sample of the certified accuracy numbers (like Figure 5 / Section 2 of Rebuttal Supplementals, both of which were run with $q=64$) when we run Vision Transformer at $\lambda = 1/8, 2/8, 3/8, 4/8$ for sample complexities of $q = 4, 8, 16, 32, 64, 128$. On an $N = 2000$ sample from ImageNet we obtain the following smoothed classifier accuracies:

| | $q=4$ | $q=8$ | $q=16$ | $q=32$ | $q=64$ | $q=128$ |
|---|---|---|---|---|---|---|
| $\lambda = 1/8$ | N/A | 0.662 | 0.688 | 0.693 | 0.695 | 0.691 |
| $\lambda = 2/8$ | 0.717 | 0.743 | 0.749 | 0.754 | 0.752 | 0.755 |
| $\lambda = 3/8$ | N/A | 0.760 | 0.774 | 0.777 | 0.782 | 0.778 |
| $\lambda = 4/8$ | 0.776 | 0.779 | 0.793 | 0.790 | 0.791 | 0.795 |

Note that the values for $q = 4$ at $\lambda = 1/8, 3/8$ are N/A because this discretization granularity is insufficient for specifying the corresponding probabilities. We observe that, in general, increasing the value of $q$ slightly increases the accuracy of the smoothed classifier. This may be because, heuristically, smaller values of $q$ make the smoothed classifier more susceptible to the choice of initial random seed. **Q2. Robust vs non-robust model performance.** We were also surprised by the non-MuS performance of the different models, and we can only speculate as to the cause. One possibility is that some radii used in $L^p$ robustness training were too small to generalize to our setting. Our finding that $L^\infty (\varepsilon = 8)$ did in fact outperform the non-robust case is some preliminary evidence of this.
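A toy version of a quantized, coupled masking distribution in the spirit of Section 3.3 can be sketched as below. This is an illustrative reconstruction under our own naming (`coupled_masks`), not the paper's exact construction; note that $\lambda$ must be a multiple of $1/q$, consistent with the N/A entries at $q = 4$, $\lambda = 1/8, 3/8$ above.

```python
import numpy as np

def coupled_masks(n, lam, q, seed=0):
    """Enumerate q coupled binary masks over n coordinates whose
    coordinate-wise keep probability is exactly lam. All coordinates share
    one quantized shift, so only q forward passes are needed to evaluate
    the smoothed classifier deterministically (vs. 2^n enumerations)."""
    assert (lam * q).is_integer(), "lam must be a multiple of 1/q"
    rng = np.random.default_rng(seed)
    t = rng.random(n)                   # fixed per-coordinate offsets
    u = np.arange(q) / q                # q shared, evenly spaced shifts
    return ((u[:, None] + t[None, :]) % 1.0) < lam  # boolean, shape (q, n)

masks = coupled_masks(n=16, lam=2/8, q=8)
# every coordinate is kept in exactly lam * q = 2 of the 8 masks
assert np.allclose(masks.mean(axis=0), 2/8)
```

Averaging the base classifier over these $q$ masks gives a deterministic smoothed prediction, matching the $q$-evaluations-per-prediction cost discussed above.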
Summary: In this paper the authors have introduced a framework to measure the stability of feature attribution methods. They do so by introducing two relaxed notions of stability called incremental stability and decremental stability, which check for stability in a neighbourhood of the original feature set. They show that the size of these stable neighbourhoods can be measured for Lipschitz-smooth classification functions and then present a methodology to convert any classifier into a Lipschitz-smooth classifier using random sampling. Finally they conclude the paper with numerical experiments to illustrate the benefits of their approach. Strengths: I feel that this paper has several strengths. Understanding the stability of feature attribution methods is a key requirement to evaluate them. This paper presents a novel and efficient approach to understanding these stability metrics, which can then be used to compare the quality of the features identified by the different methods. The stability metric makes intuitive sense and the theoretical results justify the measurement techniques for them. They also present a way of how this stability metric can be extended to any classifier. Overall, I believe this paper makes a key contribution in this area. Weaknesses: The metric seems more descriptive than prescriptive. It would be difficult to somehow incorporate the metric into the optimization problem so as to guide the network and method towards more stable models and feature attributors. Technical Quality: 3 good Clarity: 3 good Questions for Authors: I think it is important to provide a discussion about the impact of Multiplicative Smoothing on the original classifier. How is the stability metric of the smoothed classifier related to the original classifier? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work.
Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Adequately addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the Reviewer for their time and comments. We will include additional discussion based on these comments, especially regarding how practitioners may apply and train for stability. Moreover, we remark that we have also attached a Rebuttal Supplemental document with additional examples and experiments. Below we address the Reviewer's comments. ### Weaknesses: Descriptive vs prescriptive metric; clarity for practitioners. The Reviewer makes a valid point that an optimization problem formulation is useful for practitioners looking to work with MuS and stability. We initially omitted an optimization formulation because we encountered a lot of difficulties in its specification, as any such formulation must consider many problem-dependent nuances, described below. Nevertheless, we believe that the Reviewer makes a strong case for the utility of an optimization formulation, even if in a restricted setting. We will include in the Appendix some formulations and also present discussions about implementation details. Below we further discuss the details and challenges of an optimization formulation for stability. The primary challenge with expressing the stability metric as an optimization problem is that it is necessarily dependent on both the classifier model and the explainer method. As such, any optimization formulation that maximizes stability should simultaneously train both the classifier and the explainer. If one decides to parametrize and train the explainer, this opens a Pandora's box of feature attribution design, which is beyond the scope of our paper. If one fixes the explainer and optimizes only the model, then the optimization formulation is easier to write, although this is a more restrictive case as it neglects the explainer design. Nevertheless, we agree that the latter would be useful for practitioners looking to apply MuS, and we will include a model-only formulation in the Appendix.
### Questions: Impact of MuS and relation of stability metric with respect to the original classifier. We do not provide any guarantees for the original classifier. Our guarantees are only with respect to the smoothed classifier. Because the original classifier may be an arbitrary $h : \mathbb{R}^n \to [0,1]^m$ function, it is challenging to establish any kind of formal guarantees. Only by smoothing the original ("base") classifier can we establish any kind of non-trivial guarantees. However, it would be interesting to consider what additional properties we could guarantee if one could assume certain qualities about the original classifier. --- Rebuttal Comment 1.1: Title: Response Comment: I thank the authors for the detailed response. I will maintain my "weak accept" review.
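For intuition on the relationship between the base and smoothed classifiers discussed in this rebuttal, here is a minimal Monte Carlo sketch of multiplicative smoothing with coordinate-wise Bernoulli masking (a Bernoulli distribution is described elsewhere in the rebuttals as an intermediate step; Theorem 3.2's actual distribution does not require coordinate-wise independence and is evaluated deterministically). The toy classifier `f` and the name `smooth_mc` are our own illustrations.

```python
import numpy as np

def smooth_mc(f, x, alpha, lam, num_samples=500, seed=0):
    """Monte Carlo estimate of a multiplicatively smoothed classifier:
    average the base classifier f over random maskings that keep each
    feature selected by alpha with probability lam."""
    rng = np.random.default_rng(seed)
    outs = [f(x * (alpha * (rng.random(len(alpha)) < lam)))
            for _ in range(num_samples)]
    return np.mean(outs, axis=0)

# Toy base classifier: softmax over a linear model (arbitrary weights).
W = np.array([[1.0, -0.5, 0.2], [-0.3, 0.8, 0.1]])
def f(z):
    logits = W @ z
    e = np.exp(logits - logits.max())
    return e / e.sum()

x = np.array([1.0, 2.0, -1.0])
alpha = np.array([1.0, 1.0, 0.0])      # binary attribution: keep features 0 and 1
probs = smooth_mc(f, x, alpha, lam=0.5)
assert np.isclose(probs.sum(), 1.0)    # still a probability vector
```

As the rebuttal notes, the formal guarantees attach to this smoothed classifier, not to the arbitrary base function `f` itself.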
Summary: This paper presents a technique for extracting feature attributions that are certifiably stable in the sense that the model's predictions are consistent on supersets of attributed features. As the title suggests, the approach is based on multiplicative smoothing, a novel type of Bernoulli smoothing based on masking, rather than perturbing, input features. Notably, the paper shows how to construct a distribution with coordinate-wise Bernoulli features that requires significantly fewer samples to obtain a certificate. Strengths: This work introduces a new type of attribution stability that is not addressed by prior work on robust feature attribution, and presents rigorous yet practical techniques for obtaining stable explanations. As the core technique is based on randomized smoothing, it is applicable broadly to different types of models and even different feature attribution methods. Thus, it complements a large body of existing work, and gives concrete, practical ways to improve it with respect to stability. While smoothing-based certification methods are typically very expensive, the structured dependency sampling described in section 3.3 gives a clever way to avoid this, making the approach reasonably inexpensive. The empirical results give a sense of what the guarantees look like on real data, and compare several widely-used attribution methods under the certification regime. The writing is polished and clear on most of the important points. Overall the paper is an enjoyable read. Weaknesses: The accuracy penalty imposed by multiplicative smoothing is large (a related small point: the horizontal axis between figures 4 and 5 isn't the same, making these results a little harder to line up). It doesn't seem that the models were trained with augmentations that match smoothing noise, or that denoising was used on smoothing samples, so there may be opportunities to improve on this. 
Without improvements, the results in Figure 5 pull somewhat against the practical significance of the work. The writing is a bit unclear around the sufficiency of Lipschitzness and masking equivalence. A less-careful read of the paper that focuses more on early sections might come away with the impression that most techniques for obtaining Lipschitz models will suffice for the stability guarantees sketched in remark 2.6. The authors might consider moving some of the intuitions from proposition 3.1 into this discussion. The notion of stability at the center of this work is different than what I would have expected, being familiar with some prior work on this topic and reading the abstract. I agree that the model needs to play a central role in characterizing the stability of explanations, but it's less clear that being stable to added features is always an important property. The paper might have wider appeal if the authors are able to justify this, using examples or otherwise (Figure 1 is indeed an example, but doesn't address the question of why the classifier's behavior over the right two images is harmful or undesirable). Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: 1. Were the models used in figures 4 and 5 trained on appropriately augmented data? 2. Have you attempted to use denoising methods as an intermediate step after sampling from the smoothing distribution? 3. Are there other existing methods for obtaining Lipschitz models, perhaps without smoothing, that could also work with your certificates? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The paper does not have an explicit limitations section, but these points are discussed where it is appropriate. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the Reviewer for their positive impression of our paper and helpful comments on how to improve readability, especially on the notion of masking equivalence and our particular definition of stability. Moreover, since our submission, we have fine-tuned our models for longer and rerun a number of experiments, generally yielding substantially better numerics, especially on ResNet50. We include some examples in the Rebuttal Supplementals, and we respond to each of the Reviewer's comments below. ### Weakness 1 / Question 1: Accuracy penalty and training with augmented data. After our submission, we have had the opportunity to rerun some of our evaluations under more generous time constraints. In particular, the original numbers for Figures 4 and 5 were run with models after only one epoch of fine-tuning on the properly augmented data. Our rerun experiments use models with five epochs of fine-tuning, and we achieve better results, substantially so for ResNet50. We have included some of the new figures in Section 2 of the Rebuttal Supplementals. In addition, we have reworked the discussion to better emphasize that the case of $\lambda = 1/16$ is essentially a stress test since this tends to be more noise than what many smoothing papers employ in their experiments. Notably, because smoothed variants of Vision Transformer tend to outperform the smoothed ResNet50 counterpart, we will include additional discussion of how these experiments can inform model selection for potential users of MuS. ### Weakness 2: Exposition on the sufficiency of Lipschitzness and masking equivalence. This is a great suggestion on where to position the presentation on masking equivalence. We will work this into our revised manuscript. ### Weakness 3: "Non-standard" notion of stability and its desirability. The Reviewer makes an acute observation that the naming of "stability" is tricky. Indeed, we considered other possible names (e.g. 
"monotone", "sufficient", "convincing", "upwards-consistent", "robust", etc) but each choice came with its own advantages and disadvantages. In the end, we felt that "stable" was the most descriptive name that was the least potentially confusing, even though it risks conflation with terminology from adversarial robustness. We will revise the exposition to better distinguish and clarify our definitions. We also thank the Reviewer for pointing out that the benefits of stability are not sufficiently well-motivated. We plan to better motivate the need for formal guarantees by using more examples from medicine. In particular, we have constructed an example using chest X-ray images (see Section 1 of Rebuttal Supplementals) to use as a non-trivial example in our Introduction. We believe that this should better motivate readers outside of the ML-explainablity community. ### Question 2: Denoising steps. We have not attempted to use denoising methods as an intermediate step. This is a great suggestion and seems like a viable strategy for improving performance. Importantly, however, directly applying additive $L^p$-based denoising methods would not work in our setting, as our multiplicative noise drops features. An appropriate "denoiser" in our noising scheme therefore needs to fill in the dropped features via some data imputation method. Investigating such a denoiser is an interesting future direction. ### Question 3: Other methods for obtaining Lipschitz models. We are not aware of other methods for obtaining Lipschitz models that conform to our criteria for stability. To our knowledge, our method is the only one among smoothing-based methods. We are not aware of any non-smoothing approaches, but this would be an interesting direction for future research. --- Rebuttal Comment 1.1: Title: Reply to rebuttal Comment: Thank you for taking the time to write this detailed rebuttal. Your suggestions for addressing the weaknesses mentioned in my original review look reasonable. 
A few comments on your answers to my questions below. * [Q1] I'm glad to see that training with augmentation improves things! * [Q2] Certainly, additive denoising isn't right. Something based on a diffusion model, along the lines of [1], could be promising. * [Q3] Agreed, and to be clear, I believe that this paper stands on its own either way. I'll maintain my score and support for this paper. [1] Nicholas Carlini, Florian Tramer, Krishnamurthy Dj Dvijotham, Leslie Rice, Mingjie Sun, J. Zico Kolter. (Certified!!) Adversarial Robustness for Free! https://arxiv.org/abs/2206.10550
Rebuttal 1: Rebuttal: We thank the Reviewers for their time and constructive feedback. The Reviewers have raised many insightful comments and useful suggestions on how to improve the expositional narrative, technical presentation, and experimental evaluations. In addition, the Reviewers have also suggested a number of additional examples and experiments that we believe will help bolster the presentation of our paper. We have attached a Rebuttal Supplemental material that includes a selection of the relevant example and experiment figures. Pdf: /pdf/2c43999278b90e1b3d6ae785375f579cfbaf39fc.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: The draft addresses an important problem of feature attribution method selection and formalizes a notion of attribution stability to do this. The approach is based on modifying classifiers to make them Lipschitz w.r.t. feature masking. Once done, the new classifier has provable radii of stability, defined as $L^1$ distances within which the found sparse explanation is sufficient and adding new features will not change the prediction. The approach is tested on vision and NLP tasks with four different explanation methods. Despite some weaknesses the paper is a good contribution to the field of feature attribution explanations. Strengths: - new measurable definition of explanation stability - a method to convert any classifier into one that permits the stability calculation - theoretical analysis; efficient computation - comprehensive empirical evaluation Weaknesses: - The need to modify the classifier before guarantees can be calculated somewhat diminishes the practical value. In practice, we would be interested in the classifier that actually does the prediction, not a smoothed version of it; as the authors in Sec. 4.1 measured the drop in accuracy from smoothing to be tens of percent, it could be a no-go for critical classifiers. - The proposed stability is operationalized only for binarized explanations and features as image patches, which throws the classifier into the out-of-distribution regime and, on top of smoothing, further complicates drawing conclusions from the approach. In fact, all feature attribution methods compared with the proposed stability are continuous and require binarization before they can be used.
Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: - typo in the last word of line 44 - nitpick: logits are usually defined as pre-softmax values, not probabilities (line 71) - Figures are not readable in black and white Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: No dedicated Limitations section but limitations mentioned in the text Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the Reviewer for their positive reception of our paper. We have rerun some experiments under more generous time budgets to improve our experimental results and better address the highlighted weaknesses. We include some of these in the Rebuttal Supplementals, and we address each of the Reviewer's comments below. ### Weakness 1: Need to modify the classifier and weakness of our Figure 5 evaluations. We thank the Reviewer for highlighting a need to better discuss this weakness of smoothing methods, especially in the context of our initial experiments in Figure 5. After our submission, we reran a number of our experiments after fine-tuning our models for more epochs and observed substantial performance gains, for instance on experiments of Figure 5 (see Section 2 of Rebuttal Supplementals). Although the new numbers are improved, they do still exhibit similar trends, and we comment on both the old and new numbers here. The Reviewer is correct in pointing out that under greater noise (i.e. $\lambda = 1/16$) there is non-trivial accuracy degradation. Fortunately, these are much improved in our new experiments. We will revise our discussion to also emphasize that the extreme-noise case is effectively a stress test, as $\lambda = 1/16$ is more noise than what many smoothing papers consider. Moreover, we will also include commentary on the fact that one unit of radius in our setting can certify a much greater amount of perturbation to the input than what classical additive smoothing methods can achieve. For instance, we segment 3x224x224 ($n=150528$ features) images into 64 patches, so one unit of radius is then 1/64 of the image. By contrast, many additive smoothing methods struggle to certify more than tens out of the 150k+ features. This difference is largely due to the fact that we are Lipschitz with respect to input feature maskings (which we express as $L^1$-Lipschitz to the $\alpha$), rather than $L^p$-norm Lipschitz to the input. 
In addition, we have expanded our discussion to state that the experiments of Figure 5 may be used to guide model selection for potential users of MuS. In particular, at least on vision-based problems, Vision Transformer achieves higher smoothed accuracy than ResNet50. This shows that transformer-based architectures may be more compatible with MuS than CNN-based models. ### Weakness 2: Focus on binarized explanations. The objective of our work is to achieve provable guarantees for feature attributions. To do so, we first focus on the simpler case of binary attributions. Importantly, explanation-relevant guarantees currently do not exist for either continuous-valued or binary attributions. The Reviewer is correct in identifying that the continuous-valued case is more general; however, it is not trivial to determine (1) what kind of formal properties should be imposed on continuous-valued attributions, and (2) how one may achieve such properties in a computationally reasonable manner. For instance, it is not clear to us how one should specify properties like stability for continuous-valued attributions. We thus focus on the binary case first. ### Questions: Typos, wording, etc. We thank the Reviewer for pointing these out. We have corrected the typo on L44, as well as others that we have found. We have also reworded "logit" to "class probability" and "confidence" where applicable, and we believe that this change also makes the paper more accessible to a non-ML audience while improving its technical clarity. Moreover, we will modify our figures to improve black/white readability where possible. We show one such example with different line styles in Section 2 of the Rebuttal Supplementals. --- Rebuttal Comment 1.1: Title: Thank you Comment: Thanks for the detailed reply - keeping the Accept note. (review slightly edited for clarity/typos)
Summary: This paper studies the stability of binary attributions. The attribution is defined to be stable if the prediction does not change when adding additional features. The multiplicative smoothing is proposed to achieve the Lipschitz condition, which is proved to imply the relaxation of stability. Experiments verify that the smoothing technique is useful for stability. Strengths: 1. The paper is well-written and easy to understand. The problem of attribution stability is novel. 2. The proposed multiplicative smoothing is effective under the defined stability setting and can be applied to any model. Weaknesses: 1. For the smoothing technique, my main concern is its application scope, which currently focuses only on binary attribution methods. Although a method is proposed to convert non-binary attributions into binary ones, this conversion process disregards the continuous information in the attribution maps. The potential loss of information during this conversion is significant, especially when dealing with features grouped into 64 patches. 2. The motivation behind this work can be emphasized further. While it is mentioned that explanations can be fragile, more details are necessary to inform readers about the specific circumstances under which explanations may fail. In order for the stability of explanations to be meaningful, classifiers must rely solely on the attributions, or on the attributions with more features added, to make correct decisions. However, it remains unclear why practitioners would opt to use attributions for decision-making when they already have access to the original input data. 3. The accuracy drop can also be a weakness of the proposed method. While the decrease is expected, a significant decrease in accuracy may suggest limited usefulness of the method. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. I'm interested in the performance comparison of additive smoothing and multiplicative smoothing on the stability setting. 
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: See weakness Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the Reviewer for their constructive comments. We will include additional exposition, discussion, and examples in our revised manuscript to better motivate our work and clarify our contributions. Furthermore, we have attached a Rebuttal Supplemental document with relevant material. ### Weakness 1: Application scope of binary attributions. The Reviewer accurately notes that the continuous-valued case is indeed more general. However, it is also more challenging: it is not clear how to formally specify many explanation-relevant properties, nor how one may achieve such properties. For instance, it is not clear to us how stability-like properties should be formulated for the continuous case. Surprisingly, formal explanation properties of any kind are rarely studied, even in the binary case. We therefore begin with binary attributions as a first step towards explanation stability. Moreover, we do not aim to introduce novel binary attributions. Rather, our simple binarization schemes are solely intended to show a proof-of-concept that one can certify stability guarantees via existing methods. Importantly, to our knowledge, there do not exist any continuous nor binary attribution methods that explicitly consider our notion of stability. We therefore test MuS with very popular methods (e.g. LIME, SHAP, etc), which all happen to be continuous-valued and are far more well-known than any binary variant. ### Weakness 2a: Classifier must solely rely on attributions. The Reviewer's observation is correct. In fact, this highlights an important question: how does one evaluate the "quality" of a feature selection? Our choice for quality evaluation is most natural in the context of vision models, wherein revealing parts of an image is analogous to revealing information of the input. If another quality metric were used (e.g. Reference [15]), then our stability definition will no longer apply. ### Weakness 2b: Usefulness of feature attributions. 
A major ongoing use case of feature attributions is in the application of machine learning to medicine [1,2,3]. Many such explanation methods are already deployed, but there is often a glaring absence of formal guarantees. This is especially concerning, because such explanations are intended to help medical personnel make critical decisions about patient care. As we cannot expect a general user to understand the technical nuances of ML explanations, it is important that these explanations should be engineered with useful formal guarantees by construction. Our work on explanation stability is a step in this direction. We will also include a medical example from Section 1 of the Rebuttal Supplementals. Here, a detector from TorchXRayVision predicts an image to have pneumonia with 25.5% confidence. If a doctor then observes two similar feature selections with dramatically different pneumonia detection confidences (e.g. 29.56% vs 50.05%), they may lose trust in the model and explanation. The objective of our work is to analyze and understand how to prevent such scenarios. ### Weakness 3: Accuracy drop of smoothing. Since our submission, we have fine-tuned the models for more epochs and obtained better evaluation numbers. An example of one such new improvement is in Section 2 of the Rebuttal Supplementals, in which we rerun the Figure 5 experiments and observe substantial accuracy improvements, especially for ResNet50. We will also emphasize that these numbers include heavy noise stress tests (i.e. $\lambda = 1/16$). This is more noise than is typical in smoothing papers and leads to very small $1/16$-Lipschitz constants. Moreover, we will include discussion that the relative difference in smoothed classifier accuracy in these experiments is useful in guiding model selection for MuS users. In particular, this experiment demonstrates the relative advantage of transformer-based architectures over CNN-based ones on vision tasks. 
### Question 1: Comparison with additive smoothing. We thank the Reviewer for suggesting this experiment. Note that classic additive smoothing approaches, which add noise directly to the input, use a radius that is typically too small to cover the deletion of even one feature, so we would not expect them to work well. We verified this in the following experiment similar to Section 4.1, with $L^1$-smoothing noise from distributions as in References [21, 27]: We use SHAP-top25% and test how often Vision Transformer is incrementally stable with radius $r \geq 1$ on $\lambda = 8/8, 4/8, 2/8, 1/8$. We use $N=250$ samples from ImageNet. The numbers are as follows, and are also plotted in Section 3 of the Rebuttal Supplementals:
Add: 0.184, 0.320, 0.360, 0.156
MuS: 0.716, 0.660, 0.512, 0.364
This shows that MuS consistently outperforms classic additive smoothing in terms of achieving radius $r \geq 1$ incremental stability, as expected. We will include this experiment and discussion in our revised manuscript. We note that the Reviewer may also be interested in using additive smoothing not in the classical setting, but to directly smooth the $\alpha$ parameter in the explainable model framework from our paper. However, obtaining stability in this way is not theoretically possible, which is formalized in Proposition 3.1. This is precisely why we had to develop multiplicative smoothing. ### Supplemental References: [1] A. M. Antoniadi, Y. Du, Y. Guendouz, L. Wei, C. Mazo, B. A. Becker, and C. Mooney. Current Challenges and Future Opportunities for XAI in Machine Learning-Based Clinical Decision Support Systems: A Systematic Review. Applied Sciences, 2021. [2] W. T. Hrinivich, T. Wang, and C. Wang. Interpretable and explainable machine learning models in oncology. Frontiers in Oncology, 2023. [3] S. Khedkar, P. Gandi, G. Shinde, and V. Subramanian. Deep Learning and Explainable AI in Healthcare Using EHR. Deep Learning Techniques for Biomedical and Health Informatics, 2020. 
--- Rebuttal Comment 1.1: Comment: Thanks for the authors' detailed response, which has addressed most of my concerns. The work's contribution is commendable, especially considering it is a first attempt at explanation stability. Therefore, I will increase my score to 5.
Aiming towards the minimizers: fast convergence of SGD for overparametrized problems
Accept (poster)
Summary: This work proposes a new condition called the aiming condition, which looks similar to quasar-convexity but provides fundamentally different convergence guarantees for SGD. Under the aiming condition, along with several other regularity conditions, SGD can achieve the same sample complexity as GD. It is then shown that wide neural networks enjoy this property with high probability. Strengths: - The aiming condition is impressive as a condition for SGD to achieve the same sample complexity as GD. - The presentation of the results is clear. This work contains a lot of results, which could potentially make the paper hard to follow. However, the introduction provides a clear roadmap. Weaknesses: - I am expecting more discussion on the comparison of the aiming condition against existing conditions (e.g. quasar-convexity). This involves two aspects: 1. I would like to see some more examples where the aiming condition is satisfied but quasar-convexity does not hold; 2. It would be interesting to provide some intuition about why quasar-convexity cannot provide a similar result. - This work does not seem to be strong enough technically. - This work has some minor presentation problems. For example, the same sentence appears twice in Line 62 and Line 160; furthermore, in Line 251, the radius needs to be scaled up rather than shrunk. Technical Quality: 3 good Clarity: 3 good Questions for Authors: I am mainly curious about the comparison of quasar-convexity and the aiming condition (see "Weaknesses" above). Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: This work, with a theoretical nature, does not have potential negative societal impact. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the feedback. We address your concerns below. **W1:** *I am expecting more discussion on the comparison of the aiming condition against existing conditions (e.g. quasar-convexity). This involves two aspects: 1. I would like to see some more examples where the aiming condition is satisfied but quasar-convexity does not hold; 2. It would be interesting to provide some intuition about why quasar-convexity cannot provide a similar result.* **A:** Thank you for bringing attention to this point. We will now argue that in the interpolation setting, quasar-convexity is a very restrictive condition because it requires the set of zero-loss solutions to be a linear subspace, which is usually not the case (see for example [1]). To be precise, quasar-convexity relative to a point $\bar w$ with $L(\bar w)=0$ means that $\langle \nabla L(w),w-\bar w\rangle\geq \theta\cdot L(w)$ for all $w$ near $\bar w$. Aiming, in contrast, stipulates the analogous inequality but with $\bar w$ crucially replaced by $\mathrm{proj}_S(w)$, where $S$ is the set of zero-loss solutions. To see the limitation of quasar-convexity, note that quasar-convexity implies that $S$ is star-convex relative to $\bar w$ (Observation 3 in reference [13] in the paper). That is, there exists $\epsilon>0$ such that for any $w\in B_\epsilon(\bar w)\cap S$, the line segment joining $w$ and $\bar w$ is fully contained in $S$. This is a very restrictive assumption. For example, a smooth manifold $S$ is star-convex near $\bar w$ if and only if it coincides with a linear subspace around $\bar w$. The PL-condition in turn implies by Theorem 2.16 in [2] that $S$ is a smooth manifold. Therefore, if loss functions for training wide neural networks were quasar-convex, then the set of interpolating solutions would form a linear space on any compact set, which is certainly not true. 
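For concreteness, the two inequalities discussed above can be placed side by side (a restatement in the rebuttal's notation, not a new result):

```latex
% Quasar-convexity relative to a fixed minimizer \bar w with L(\bar w) = 0:
\langle \nabla L(w),\, w - \bar w \rangle \;\geq\; \theta \cdot L(w)
  \qquad \text{for all } w \text{ near } \bar w.

% Aiming: the fixed anchor \bar w is replaced by the nearest zero-loss point,
% so the inequality can follow a curved solution set S:
\langle \nabla L(w),\, w - \mathrm{proj}_S(w) \rangle \;\geq\; \theta \cdot L(w).
```

The only difference is the anchor point, but it is exactly this difference that frees $S$ from having to be star-convex (hence locally linear) around a single minimizer.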
In contrast, as we have proved, on any fixed radius ball around the initial point, sufficiently wide neural networks satisfy the aiming condition. In parallel, we also showed that any $C^3$-smooth function satisfying PL automatically satisfies the aiming inequality near a point $\bar w\in S$. In summary, quasar-convexity does not hold when training wide neural networks (since otherwise it would imply that interpolating solutions form a linear subspace), while aiming provably holds---one of our main results. We will add a discussion along these lines to the revision. [1] Liu, Zhu, Belkin, Loss landscapes and optimization in over-parameterized non-linear systems and neural networks, ACHA, 2022. [2] Rebjock, Boumal, Fast convergence to non-isolated minima: four equivalent conditions for $C^2$ functions, arxiv.org/pdf/2303.00096.pdf. **W2:** *This work does not seem to be strong enough technically.* **A:** In this paper, we address a fundamental question of interest both for optimization and machine learning: why it is that, numerically, the iteration complexity of SGD is comparable to that of GD when training wide neural networks. In order to answer this question, we introduced a number of new analysis techniques. First, the proof that neural networks satisfy aiming and uniform aiming---new conditions introduced in the paper---relied on some careful computations with nonlinear least squares and the transition to linearity phenomenon. Second, the proof of convergence of SGD relied on decomposing the iterate trajectories into "tangent" and "orthogonal" components to the solution set. The aiming condition ensured that the iterates make steady progress in orthogonal directions towards the solution set, while uniform aiming controlled the tangent motion. Third, using a novel martingale stopping time argument together with this decomposition, we bounded the probability that the iterates escape the ball. 
We expect such an orthogonal decomposition together with a stopping time argument to be useful more broadly. With these new techniques, we were able to contribute the following novelties. First, we proved that sufficiently wide neural networks satisfy our proposed aiming condition. Note that the width requirement we have is the same as that in establishing convergence of GD in prior work; hence, we are not imposing stronger assumptions. Second, we proved that aiming, interpolation, and quadratic growth conditions enable application of SGD with a large step size and a fast linear rate. Indeed, Section 1.2 explains in detail that the linear rate we obtain is much faster than existing convergence guarantees for SGD. We believe that this paper is strong technically, and that the developed techniques and regularity conditions will be useful for understanding optimization algorithms and landscapes of loss functions in nonconvex optimization and learning. **W3:** *The same sentence appears twice in Line 62 and Line 160; furthermore, in Line 251, the radius needs to be scaled up rather than shrunk.* **A:** Thank you for pointing this out. We will correct them in the revision and do a thorough pass through the paper. --- Rebuttal Comment 1.1: Title: Thanks for the rebuttal Comment: I would like to thank the authors for the detailed rebuttal. I certainly had some misunderstandings when writing the initial review, and I appreciate the authors for the great explanation. I will vote for this work being accepted.
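To make the large-step-size claim above concrete, here is a toy sketch of the interpolation regime the rebuttal describes (our own illustrative example, not the paper's neural-network setting): overparameterized linear least squares, where single-sample SGD with a step size independent of the sample count converges linearly to the zero-loss set.

```python
import numpy as np

# Hypothetical toy instance of the interpolation regime: overparameterized
# linear least squares with n samples and d >> n parameters, so the set
# S = {w : Aw = b} of zero-loss (interpolating) solutions is nonempty.
rng = np.random.default_rng(0)
n, d = 5, 50
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)

loss = lambda w: 0.5 * float(np.mean((A @ w - b) ** 2))

# "Large" step size: 1 / max_i ||a_i||^2, independent of the sample count n.
eta = 1.0 / float(np.max(np.sum(A * A, axis=1)))

w = np.zeros(d)
losses = [loss(w)]
for _ in range(1000):
    i = rng.integers(n)                      # single-sample stochastic gradient
    w -= eta * (A[i] @ w - b[i]) * A[i]
    losses.append(loss(w))
# Under interpolation the stochastic gradient's variance vanishes on S,
# so the loss decays roughly geometrically with no step-size decay needed.
```

On this instance the loss decays to numerical zero at a linear rate, mirroring the fast SGD rate claimed in the rebuttal; of course, the neural-network case requires the aiming machinery rather than this closed-form argument.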
Summary: This work shows a regularity condition for SGD in the interpolation regime which allows it to have the same fast linear convergence rate as deterministic gradient descent. Hence, the theory presented in this paper supports the practical observation that, with the same (large) learning rate, mini-batch SGD has almost the same convergence rate as GD. This goes against the traditional approaches for SGD convergence under the PL condition that require SGD to have a smaller step size than GD, hence making SGD's convergence slower in theory. The authors here present the conditions under which SGD has similar iteration complexity as GD. Strengths: 1) The main strength of the work lies in its novelty. The motivation of the problem seems clear from the introduction. 2) This is an important problem since it is crucial to have a large learning rate for SGD as it helps improve generalization. Hence, the current theories that analyze SGD under the PL condition and require it to have a smaller l.r. may suffer in generalization. The theory presented by the authors allows the same large learning rate to be used as GD. Hence, the variance of SGD coupled with the allowable large learning rate can boost generalization. Weaknesses: 1) Looking at the theorems, I still feel the assumptions made are too strong to hold in a non-convex landscape, especially aiming. For theorem-1.2, which considers a non-convex loss landscape on $w$, it is unclear how aiming can hold on a ball of radius $r$ and its relation to the curvature at that point (given by the minimal eigenvalue of the NTK). The authors make sure that the iterates remain inside a ball $B_{r}(w_{0})$ near the minima but the radius depends on the eigenvalue of the NTK at initialization! This makes a very strong assumption that the initial point is set to be very close to the true minima $w_{0}$. More so, if initialization is made far away from the minima, there is a high probability that aiming does not hold. 
2) From the theorems, it is unclear how the stochasticity from the mini-batch gradients affects the convergence or what its effect is in ensuring that the iterates don't escape $B_{r}(w_{0})$. For theorem-2.3, I assume that it will take more iterations if the variance in the mini-batch gradient is high. It is not very clear from the inequality what the relationship is. A minor correction: 3) On line-42, it might be better to cite a different source such as [1] to refer to the generalization effect of large learning rate, as the authors of "Edge of stability" themselves mention that they don't claim the generalization effect of large learning rate but rather focus on the stability effect. See https://youtu.be/6xeh6gfESuc?t=3016 [1] Li, Yuanzhi, Colin Wei, and Tengyu Ma. "Towards explaining the regularization effect of initial large learning rate in training neural networks." Advances in Neural Information Processing Systems 32 (2019). Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: See points 1 and 2. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The main limitation of this work is a strong assumption on the locality of the SGD analysis. Some further clarification would be helpful. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive feedback. We address the concerns below: **W1:** *Looking at the theorems, I still feel the assumptions made are too strong to hold in a non-convex landscape, especially aiming. For theorem-1.2, which considers a non-convex loss landscape on $w$, it is unclear how aiming can hold on a ball of radius $r$ and its relation to the curvature at that point (given by the minimal eigenvalue of the NTK). The authors make sure that the iterates remain inside a ball $B_r(w_0)$ near the minima but the radius depends on the eigenvalue of the NTK at initialization! This makes a very strong assumption that the initial point is set to be very close to the true minima $w_0$. More so, if initialization is made far away from the minima, there is a high probability that aiming does not hold.* **A:** Thank you for bringing this to our attention; we can see where the confusion arises in the informal statement of Theorem 1.2. In the statement of the theorem, one can replace $\lambda$---the minimal eigenvalue of the NTK at initialization---by the minimal eigenvalue $\lambda_{\infty}$ of the NTK of an infinitely wide neural network, which *does not* depend on the initialization. See the discussion on page 7 for a precise statement. The two theorem statements are equivalent because as the width increases, the parameter $\lambda$ approaches $\lambda_{\infty}$. More precisely, in the regime $m=\Omega(nr^{6l+2}/\lambda_{\infty}^2)$, with high probability one has $\lambda\geq \lambda_{\infty}/2$. Therefore the requirement on the radius $r= \Omega(1/\lambda \delta^2)$ is independent of the initialization, with high probability. 
Summarizing, *for any fixed radius $r= \Omega(1/\lambda_{\infty} \delta^2)$, one may take the width sufficiently large $m=\Omega(nr^{6l+2}/\lambda_{\infty}^2)$, so that with probability roughly $1-\delta$ aiming holds on the entire ball $B_r(w_0)$ and the SGD iterates converge at the linear rate $O(\exp(-t\lambda_{\infty}))$.* We emphasize that in this parameter regime (i.e. when $m$ is sufficiently large), the ball $B_r(w_0)$ is in fact large enough to contain the whole optimization path of gradient descent (see for example Section 5 in [1]), and the initial point need *not* be close to the true minima. That means, even if "initialization is made far away from the minima", $B_r(w_0)$ still covers both the initialization and the global minima. Hence, the aiming condition would hold throughout the training process. We will modify the statement of the theorem and the preceding discussion to make these points clear. We hope that this explanation convinces the reviewer that aiming is indeed a reasonable condition. [1] Liu, Zhu, Belkin, Loss landscapes and optimization in over-parameterized non-linear systems and neural networks, ACHA, 2022. **W2:** *From the theorems, it is unclear that how the stochasticity from the mini-batch gradients effect the convergence or what is it's effect in ensuring that the iterates don't escape $B_r(w_0)$. For theorem-2.3, I assume that it will take longer iterations if the variance in mini-batch gradient is high. It is not very clear from the inequality as how the relation is.* **A:** Thank you for the question. Interpolation implies the basic bound $E[||\nabla \ell(w,z)||^2] \le 2\beta \cdot L(w)$, where $\beta$ is the smoothness constant of the loss functions (Line 408 of our appendix). Thus the second moment of the stochastic gradient (and its variance) tends to zero as the iterates approach the solution set $S$. The "rate" at which the variance tends to zero is controlled by $\beta$. 
Now, if one wants to use this bound in order to ensure that the expected function value $E[L(w_t)]$ contracts in each iteration, one would need to use a tiny learning rate $\eta=\frac{\alpha}{\beta^2}$, leading to a slow rate of convergence (as simple examples show). To overcome this difficulty, we focus on the contraction in the squared distance $E[dist^2(w_t,S)]$. The aiming condition together with the aforementioned bound on the variance ensures that $E[dist^2(w_t,S)]$ indeed contracts in each iteration when using a stepsize on the order of $\eta=\frac{\theta}{\beta}$ and while the iterates remain in $B_r(w)$. That is, the iterates make very good progress moving "orthogonally" towards $S$. It remains to bound the probability that the iterates escape the ball $B_r(w)$. This bit of the argument is involved. We first establish an auxiliary condition called uniform aiming (page 6) for wide NNs, and use it to bound the "tangent motion" relative to $S$. A careful stopping time argument then bounds the probability of escaping the ball. We will add a more detailed discussion to the revision. **W3:** *On line-42, it might be better to cite a different source such as [1] to refer to the generalization effect of large learning rate$\ldots$* **A:** Thank you for pointing this out. We will cite this reference instead. --- Rebuttal Comment 1.1: Comment: I acknowledge that I have read the authors' response and they have answered my concerns. I intend to keep my score.
Summary: This work presents a set of conditions under which the convergence rate of SGD with large step size is similar to that of gradient descent (the deterministic setting). This is in contrast to prior work, where the convergence of over-parameterized SGD under the PL condition requires a small step size, and converges slowly as a result of this small step size. Strengths: 1. The paper is well-motivated and clearly organized. This work provides a theoretical result that improves previous analyses of the linear convergence rate of overparameterized SGD. 2. The paper thoroughly compares its result to prior works mentioned. 3. The paper attempts to verify the assumptions that it makes. Weaknesses: 1. The paper states that its goal is to improve stepsize selection and convergence rate of SGD for nonconvex problems under certain conditions (see lines 133-134). However, the paper does not directly do this. 2. This work claims to be the first to present an analysis of the convergence rate of over-parameterized SGD with large step size that is similar to the convergence rate attained by gradient descent. However, prior work touches upon this as well (see “Observations in simplified settings” in The Impact of Neural Network Overparameterization on Gradient Confusion and Stochastic Gradient Descent by Sankararaman, De, Xu, Huang, and Goldstein). Perhaps it is worth connecting both works to further demonstrate this paper’s contribution. (Please correct me if I am mistaken.) 3. While the contribution of this paper is predominantly theoretical, I believe that this work could benefit from some empirical evaluation as well. Current experiments are fairly limited (MNIST is the only dataset used and is easy to learn. Additionally, model architecture is also quite limited. Using other datasets e.g. CIFAR10/100 or running the experiments for networks with varying widths might be useful). 
Technical Quality: 3 good Clarity: 3 good Questions for Authors: There is no figure 2 Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have described the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive comments. We address your concerns below in detail. **W1:** *The paper states that its goal is to improve stepsize selection and convergence rate of SGD for nonconvex problems under certain conditions (see lines 133-134). However, the paper does not directly do this.* **A:** Apologies; this statement was not carefully stated. In lines 133-134, we meant to state our goal of obtaining *theoretically acceptable* stepsizes, which better match the large stepsizes used in practice. In this paper, we successfully showed that, under certain conditions (including aiming and PL), our theory indeed allows larger stepsizes than prior works and leads to a significantly faster convergence rate, as explained in Section 1.2. Moreover, we proved that these conditions hold for sufficiently wide neural networks. **W2:** *This work claims to be the first to present an analysis of the convergence rate of over-parameterized SGD with large step size that is similar to the convergence rate attained by gradient descent. However, prior work touches upon this as well (see “Observations in simplified settings” in The Impact of Neural Network Overparameterization on Gradient Confusion and Stochastic Gradient Descent by Sankararaman, De, Xu, Huang, and Goldstein). Perhaps it is worth connecting both works to further demonstrate this paper’s contribution. (Please correct me if I am mistaken.)* **A:** Thank you for pointing out this reference (we refer to it as "the reference" below). We agree that this reference is relevant and indeed has some discussion on large step sizes for SGD. We will be sure to cite it in our revision and include a comparison. We summarize the differences as follows: 1. 
(PL condition for each summand) The reference requires that every loss function $f_i$, $i\in[N]$, satisfies the $\mu$-PL condition (see (A2) on page 3 of the reference), which is *much stronger* than the usual PL condition for the ERM loss $F=\frac{1}{N}\sum_{i=1}^N f_i$, which we use in our paper. Indeed, it is not clear what the limiting value of $\mu$ in the reference would be as the width increases to infinity. In contrast, the PL constant that appears in our bounds is the minimal eigenvalue of the NTK of the infinitely wide neural network. 2. (slower convergence rate) According to Theorem 3.1 in the reference, the required stepsize is $\alpha=1/N L$, which leads to a slow rate of convergence $O(\frac{N^2L}{\mu}\log(1/\epsilon))$ to a fixed constant loss value $\epsilon\approx \eta N/\mu$, which itself can be quite large. Here, $\eta$ is a gradient confusion constant. In contrast, the number of samples $N$ does not influence the convergence rate we obtain, and we prove fast convergence to the global minimum. Note that the reference proves that a small $\eta<4$ is a valid gradient confusion constant with high probability only when the data are sampled from a unit sphere. **W3:** *... Current experiments are fairly limited (MNIST is the only dataset used and is easy to learn. Additionally, model architecture is also quite limited. Using other datasets e.g. CIFAR10/100 or running the experiments for networks with varying widths might be useful).* **A:** Thanks for the comment. We conducted some preliminary experiments on the CIFAR-10 dataset for additional empirical evaluations. Please see the results and settings in the response to all reviewers above. Figures are shown in the pdf file attached to it. More extensive numerical evaluations are currently running and will be included in the revision. 
From what we can tell so far, the qualitative results we see on MNIST are also visible on CIFAR-10, with the main difference that GD and SGD are both much slower, as one expects. **Q1:** *There is no figure 2.* **A:** This is the figure at the top of page 4; we will add a label to this figure in the revision. --- Rebuttal Comment 1.1: Comment: I acknowledge and appreciate the authors' responses. I intend to keep my score.
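The complexity comparison in point 2 above can be summarized side by side. The exact constants of our rate are in the paper; writing it as $O((L/\mu)\log(1/\epsilon))$ below is an illustrative shorthand for a rate not influenced by $N$:

```latex
\underbrace{T_{\mathrm{ref}} = O\!\Big(\tfrac{N^2 L}{\mu}\log\tfrac{1}{\epsilon}\Big),
\ \ \epsilon \approx \tfrac{\eta N}{\mu}\ \text{(an error floor)}}_{\text{the reference, stepsize } \alpha = 1/(NL)}
\qquad \text{vs.} \qquad
\underbrace{T = O\!\Big(\tfrac{L}{\mu}\log\tfrac{1}{\epsilon}\Big),
\ \ \epsilon \to 0\ \text{allowed}}_{\text{ours, large stepsize, under interpolation}}
```

The quadratic dependence on $N$ in $T_{\mathrm{ref}}$, together with its error floor, is what makes the comparison favor the $N$-free bound even for moderate sample sizes.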
Summary: This paper studies the convergence of SGD with large step size. It is shown that under some regularity conditions, SGD enjoys a fast linear convergence rate, both in expectation and with high probability. These results can be applied to show fast convergence of SGD for wide enough feed-forward networks. Strengths: The paper is well organized and easy to follow, and especially the authors have compared in detail the assumptions/results with those in the existing papers. It is indeed an important question to study the convergence of SGD with large step sizes, and the results in this paper seem novel and interesting. Weaknesses: 1. The main application of the results is for wide enough neural networks in the NTK regime, which seems restrictive. 2. It would be helpful if the authors can highlight the technical novelties. 3. It is discussed around line 40-42 that large batch sizes could be beneficial for generalization. However, the current results don't seem to have such implications. In particular, I don't think the regime studied in the current paper is a case of the "edge of stability" regime. 4. Is it possible to verify the proposed conditions for common neural network models, at least empirically? Otherwise it's not clear if the developed theory reflects what happens in practice. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please see the question above. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors have adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. 
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive comments. We address the concerns as follows: **W1:** *The main application of the results is for wide enough neural networks in the NTK regime, which seems restrictive.* **A:** We would like to point out that the “NTK regime” setting is in line with much of the literature on optimization theory for neural networks. Nearly all convergence results for GD/SGD for neural networks so far are in the NTK regime (or, more precisely, require sufficiently large network width); see, e.g., [1-5]. To the best of our knowledge, there is no general theory for the convergence of GD/SGD outside of large-width regimes. The goal of our work is to improve the existing results on convergence of SGD by showing essentially the same rate of convergence as GD (something which has been empirically observed), and this requires a setting where existing analyses of GD exist. Furthermore, the large-width regime should logically be the first to be explored before further progress can be made. [1] Allen-Zhu, Li, Song. A Convergence Theory for Deep Learning via Over-Parameterization. ICML, 2019. [2] Du, Zhai, Poczos, Singh. Gradient Descent Provably Optimizes Over-parameterized Neural Networks. ICLR, 2018. [3] Zou, Cao, Zhou, Gu. Gradient descent optimizes over-parameterized deep ReLU networks. Machine Learning, 2020. [4] Oymak, Soltanolkotabi. Toward moderate overparameterization: Global convergence guarantees for training shallow neural networks. IEEE Journal on Selected Areas in Information Theory, 2020. [5] Liu, Zhu, Belkin. Loss landscapes and optimization in over-parameterized non-linear systems and neural networks. ACHA, 2022. **W2:** *It would be helpful if the authors can highlight the technical novelties.* **A:** Thanks for the suggestion. We will include a more comprehensive summary of contributions in the revision. 
From a high-level viewpoint: First, we introduced the aiming condition and proved that sufficiently wide neural networks satisfy it. Note that the width requirement we have is the same as that for establishing convergence of GD in prior work, e.g., [1]; hence, we are not imposing stronger assumptions. Second, we proved that the aiming, interpolation, and quadratic growth conditions together enable application of SGD with a large step size and a fast linear rate. Indeed, Section 1.2 explains in detail that the linear rate we obtain is much faster than existing convergence guarantees for SGD. From a technical viewpoint, we introduced a number of new analysis techniques. First, the proof that neural networks satisfy aiming and uniform aiming relies on careful computations with nonlinear least squares and the transition-to-linearity phenomenon. Second, the proof of convergence of SGD relies on decomposing the iterate trajectories into “tangent” and “orthogonal” components relative to the solution set $S$. The aiming condition ensures that the iterates make steady progress in the orthogonal directions towards $S$, while uniform aiming controls the tangent motion. Third, using this decomposition together with a novel martingale stopping-time argument, we bound the probability that the iterates escape the ball. We expect such an orthogonal decomposition with a stopping-time argument to be useful more broadly. [1] Liu, Zhu, Belkin. Loss landscapes and optimization in over-parameterized non-linear systems and neural networks. ACHA, 2022. **W3:** *It is discussed around line 40-42 that large batch sizes could be beneficial for generalization. However, the current results don't seem to have such implications. In particular, I don't think the regime studied in the current paper is a case of the "edge of stability" regime.* **A:** Thank you. This comment, made in passing, may have caused some confusion. 
We discuss generalization and the "edge of stability" as motivation for understanding the training efficiency of SGD with large stepsizes. We don't claim that our analysis implies better generalization or that it is in the "edge of stability" regime. Instead, we focus on the optimization performance of SGD with a large step size on the training loss only. As we have stated in Section 1.1 (outline of main results) and Section 1.2 (comparison to existing work), our goals are to improve the theoretically allowed stepsize range (especially to include the practically used large stepsizes) and to deduce a faster convergence rate on the training loss. Apologies for the confusion; we will make this point clear in the revision. **W4** *Is it possible to verify the proposed conditions for common neural network models, at least empirically?* **A:** Thank you; this is a good question. The difficulty is that the aiming condition involves $\mathrm{proj}_S(w)$---the nearest point to $w$ in the solution set $S$---which we have no way of computing. Instead, we performed the following experiment on the MNIST and CIFAR-10 datasets. We let SGD run and converge to some point $\bar w$. Then we plot the quotients $\hat \theta_t\triangleq\langle \nabla L(w_t),w_t-\bar w\rangle/L(w_t)$ along the iterate path. Note that positivity of $\hat \theta_t$ suggests that aiming holds, but does not verify it exactly. Indeed, for MNIST we see that $\hat\theta_t>0.5$, while for CIFAR-10 $\hat\theta_t>0.05$. See the results in the response to all reviewers above. Figures are shown in the pdf file attached to it. We will include the empirical verification in the revision. We note that it may seem that the experiments verify the related quasar-convexity property, but this is *not* the case because $\bar w$ is a *random point* that depends on the sample path taken by SGD. --- Rebuttal Comment 1.1: Title: Response to the authors Comment: I thank the authors for the detailed response. I don't have further questions.
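The aiming-coefficient estimate described above can be sketched on a toy problem. This is a minimal illustration on an overparameterized least-squares model rather than the paper's networks; the data sizes, stepsize, and variable names (`w_bar`, `thetas`) are our illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 20, 100                       # d > N: overparameterized, so interpolation holds
X = rng.standard_normal((N, d))
y = rng.standard_normal(N)

def loss(w):
    return 0.5 * np.mean((X @ w - y) ** 2)

def sgd(w, steps, lr=0.005):
    # single-sample SGD on the least-squares loss
    for _ in range(steps):
        i = rng.integers(N)
        w = w - lr * (X[i] @ w - y[i]) * X[i]
    return w

# first run: the last iterate serves as the surrogate for proj_S(w)
w_bar = sgd(np.zeros(d), steps=4000)

# second run: record theta_hat_t = <grad L(w_t), w_t - w_bar> / L(w_t)
w, thetas = np.zeros(d), []
for _ in range(2000):
    g = X.T @ (X @ w - y) / N        # full gradient of L at w_t
    thetas.append(g @ (w - w_bar) / loss(w))
    i = rng.integers(N)
    w = w - 0.005 * (X[i] @ w - y[i]) * X[i]

# positivity of the estimates suggests (but does not verify) aiming
print(min(thetas))
```

In this linear toy model the quotient stays near $2$ whenever $\bar w$ interpolates, which makes the positivity easy to see; for neural networks only the empirical plots in the attached pdf are available.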
Rebuttal 1: Rebuttal: We provide additional experimental results here, requested by the reviewers. Please see the other rebuttals below, directly after each review. **1: An estimate of the aiming condition:** In principle, the aiming condition is difficult to verify, as it involves $\mathrm{proj}_S(w)$---the nearest point to $w$ in the solution set $S$---which we have no way of computing. Instead, we replace $\mathrm{proj}_S(w)$ by the last iterate $\bar{w}$ of an SGD run, and compute an estimate of the aiming condition coefficient via the quotients $\hat \theta_t\triangleq\langle \nabla L(w_t),w_t-\bar w\rangle/L(w_t)$ along the iterate path. Note that this quotient is not the same quotient as would appear in the definition of quasar-convexity, because $\bar w$ is a random point that depends on the iterate path taken by SGD. *Settings:* We conduct the experiments on two datasets, MNIST and CIFAR-10. For MNIST, we trained a $3$-hidden-layer fully-connected neural network (width $=1000$). First, we train the network until convergence (loss function value $< 10^{-4}$) and record the parameters of this trained network as the optimal solution $\bar{w}$. For CIFAR-10, we train a two-layer CNN for $4000$ iterations$^\dagger$ and record the final parameter setting as $\bar{w}$. Then, for each experiment, we perform a second run, and at each iterate $w_t$ we compute the estimate of the aiming condition coefficient $\hat \theta_t$ using the equation mentioned above. *Results:* Figures 1 and 2 in the attached pdf file show the plots of $\hat \theta_t$ against the iteration number. In both cases, we observe that the estimate $\hat \theta_t$ stays positive, which suggests that aiming holds. We will include the empirical verification in the revision. **2: Mini-batch SGD on CIFAR-10:** In this experiment, we show that, with the same number of iterations, mini-batch SGD has almost the same convergence behavior as full-batch GD on the CIFAR-10 dataset (same as in Figure 1 for MNIST). 
*Setting:* We run mini-batch SGD with different batch sizes $b$ ($b=60, 120, 250, 500, 1000$), as well as full-batch GD, on a two-layer CNN with the CIFAR-10 dataset for $4000$ iterations$^\dagger$. *Results:* Figure 3 in the attached pdf file shows the training loss curves of mini-batch SGD and full-batch GD. We observe that, with the same number of iterations, mini-batch SGD has almost the same convergence behavior as full-batch GD. $\dagger$: We didn't train this CNN until convergence, as convergence is slow for CIFAR-10 on this simple CNN. We are currently running experiments on deeper networks so that a near-zero training loss can be obtained. We will include it in the revision. Pdf: /pdf/cc5e0a158d99f3d317f8386303266b1e503a1730.pdf
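The batch-size comparison above can be mimicked in miniature on a synthetic problem. This is a sketch only: an interpolating least-squares model stands in for the CNN, and the sizes, stepsize, and batch sizes are our illustrative choices, not the experiment's settings:

```python
import numpy as np

rng = np.random.default_rng(1)
N, d = 64, 256                        # d > N so the model can interpolate
X = rng.standard_normal((N, d)) / np.sqrt(d)
y = rng.standard_normal(N)

def loss(w):
    return 0.5 * np.mean((X @ w - y) ** 2)

def train(batch_size, iters=3000, lr=1.0):
    # batch_size == N recovers full-batch gradient descent
    w = np.zeros(d)
    curve = []
    for _ in range(iters):
        idx = rng.choice(N, size=batch_size, replace=False)
        g = X[idx].T @ (X[idx] @ w - y[idx]) / batch_size
        w -= lr * g
        curve.append(loss(w))
    return curve

curves = {b: train(b) for b in (8, 16, 32, N)}    # b == N is full-batch GD
for b, c in curves.items():
    print(b, c[-1])                   # final training losses at equal iteration counts
```

Because interpolation makes the gradient noise vanish at the solution, the same large stepsize works for every batch size here, and the loss curves track each other closely, which is the qualitative behavior reported for Figure 3.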
NeurIPS_2023_submissions_huggingface
2023